Each pack answers a distinct phase question in the care cycle. Packs 1-5 measure how care is practised; Pack 6 measures the boundary condition that keeps that care local, plural, and endable. To keep the public promise legible, each pack carries exactly one headline public measure, supported by diagnostics.
These metrics are designed for sufficiency, not maximisation. Each deployment context defines its own threshold — "good enough" for that community. Crossing the threshold is the goal; score-chasing beyond it invites the very metric gaming the 6-Pack warns against. A headline measure is a public test, not a totalising score.
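The sufficiency logic can be sketched as a pass/fail check per measure rather than a combined score. This is a minimal illustration, not part of the framework: the measure names, threshold values, and direction flags below are assumptions chosen for the example — each deployment context would set its own.

```python
# Illustrative sketch of sufficiency thresholds. The thresholds and
# direction flags ("max" = must stay at or below, "min" = must stay at
# or above) are hypothetical placeholders a deployment would define.

THRESHOLDS = {
    "representation_gap": ("max", 0.10),  # at most 10% of affected groups missing
    "promise_fidelity": ("min", 0.90),    # at least 90% of obligations kept as published
    "exit_readiness": ("min", 1.00),      # exit drills must fully pass
}

def sufficient(scores: dict[str, float]) -> dict[str, bool]:
    """Return pass/fail per headline measure against its local threshold.

    No aggregation: each measure is a separate public test, and a score
    far beyond its threshold is not "better" than one just past it.
    """
    result = {}
    for measure, (direction, bound) in THRESHOLDS.items():
        value = scores[measure]
        result[measure] = value <= bound if direction == "max" else value >= bound
    return result

print(sufficient({"representation_gap": 0.08,
                  "promise_fidelity": 0.93,
                  "exit_readiness": 1.0}))  # all three measures pass
```

Note the deliberate absence of a summed or weighted total: collapsing the packs into one number would reintroduce the totalising score the text warns against.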
## Headline public measures
| Pack | Phase question | Headline public measure | What it answers |
|---|---|---|---|
| 1 — Attentiveness | Did we look at the right things? | Representation gap | Which materially affected groups are still missing or badly under-represented in the record? |
| 2 — Responsibility | Did we make the right promises in an enforceable form? | Promise fidelity | What share of material obligations are explicitly owned, properly authorised, and kept on their published terms? |
| 3 — Competence | Did we execute correctly? | Verified execution rate | What share of audited decisions or releases pass guardrails, include a usable trace, and stay inside release bounds? |
| 4 — Responsiveness | Did the care land well, and did repair restore standing? | Trust-under-loss | After a bad outcome and attempted repair, do affected people report that the system became more trustworthy rather than less? |
| 5 — Solidarity | Is the ecosystem structurally fair? | Bridge index | Are shared decisions showing real cross-group participation and co-endorsement, rather than separate silos? |
| 6 — Symbiosis | Can the system stay bounded and still hand off or stop? | Exit readiness | Could this system hand over or shut down on schedule without rights loss, continuity failure, or recentralisation? |
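To make the shape of a headline measure concrete, here is a sketch of the Pack 3 verified execution rate: the share of audited decisions that pass guardrails, include a usable trace, and stay inside release bounds, all at once. The record structure and field names are assumptions for illustration; the conjunction of the three conditions is taken directly from the table above.

```python
from dataclasses import dataclass

@dataclass
class AuditedDecision:
    """Hypothetical audit record; field names are illustrative assumptions."""
    passed_guardrails: bool
    has_usable_trace: bool
    within_release_bounds: bool

def verified_execution_rate(decisions: list[AuditedDecision]) -> float:
    """Share of audited decisions satisfying all three conditions together.

    A decision that passes guardrails but lacks a usable trace does not
    count: the measure tests the conjunction, not any single property.
    """
    if not decisions:
        return 0.0
    verified = sum(
        d.passed_guardrails and d.has_usable_trace and d.within_release_bounds
        for d in decisions
    )
    return verified / len(decisions)
```

The other headline measures follow the same pattern: a defined population (audited decisions, material obligations, affected groups) and a share of that population meeting an explicit conjunction of conditions.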
## Supporting diagnostics
- Pack 1 — Attentiveness. Coverage of affected people; voice equity between least-heard and most-heard groups. Evidence artifact: the bridging map.
- Pack 2 — Responsibility. Named-owner coverage; authority-match rate; adopt-or-explain rate.
- Pack 3 — Competence. Guardrail integrity; trace completeness; canary health; audit overturn rate.
- Pack 4 — Responsiveness. Appeal closure time; repair completion rate; harm recurrence within 90 days.
- Pack 5 — Solidarity. Portability success rate; accountable-identity coverage; federation participation.
- Pack 6 — Symbiosis. Scope compliance; sunset compliance; ecology diversity; handover rehearsal pass rate.
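As one worked example of how a headline measure and its supporting diagnostics relate, the Pack 1 pair can be sketched as below. The function names and input shapes are assumptions for illustration: the representation gap is the share of materially affected groups missing from the record, and voice equity compares the least-heard group to the most-heard one.

```python
def representation_gap(affected: set[str], represented: set[str]) -> float:
    """Headline measure: share of materially affected groups still
    missing from the record (0.0 = everyone is represented)."""
    if not affected:
        return 0.0
    return len(affected - represented) / len(affected)

def voice_equity(contributions: dict[str, int]) -> float:
    """Supporting diagnostic: ratio of the least-heard group's recorded
    contributions to the most-heard group's (1.0 = parity).

    Catches the case where a group is nominally represented but barely
    heard, which the headline gap alone would miss."""
    counts = list(contributions.values())
    if not counts or max(counts) == 0:
        return 0.0
    return min(counts) / max(counts)
```

A group can close the representation gap (it appears in the record) while voice equity stays low (it is badly under-heard), which is why the diagnostic accompanies, rather than replaces, the headline measure.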
## How to read them
- Trust is decomposed, not collapsed. Pack 1 asks whether people were heard. Pack 2 asks whether promises were real. Pack 3 asks whether delivery held up under inspection. Pack 4 asks whether repair worked after harm. Pack 5 asks whether groups can act together fairly. Pack 6 asks whether stewardship can remain bounded over time.
- Artifacts are not metrics. Bridging maps, engagement contracts, repair logs, and exit drills are governance evidence. They matter because they support the headline measures; they are not scores by themselves.
- Training signals are not the public audit. Cross-group endorsement may inform model tuning or routing, but its public, conference-facing role in this framework is the Pack 5 bridge index. Pack 4 owns whether affected people could correct the system and whether repair restored trust.