Pack 4: Responsiveness — Care-Receiving

A clinic posts hours, yet patients show up to a locked door. A responsive clinic apologises, posts why it happened, updates hours, and texts people next time. The fix becomes part of the system.

Competent action creates new information. Refusing to hear it is the fastest path to failure.

Tronto defines responsiveness as "observing that response and making judgements about it — whether the care given was sufficient, successful, or complete." For Civic AI, the public question is simple: when a system harms someone, who can contest the result, how fast, and what changes because they spoke?

Definition

Why it matters

Pack 3 checks whether the process ran as promised. Pack 4 checks whether the care actually landed. That is why responsiveness begins with community judgement, not with an engineering pipeline.

Community-authored evaluations, appeals, and repair logs are the public commitment. Some teams later feed those signals into routing or model updates — sometimes called Reinforcement Learning from Community Feedback (RLCF) — but that implementation choice is secondary. The community authors the yardstick first.

What it looks like in practice

From ideas to practice

  1. Expose the appeal button. Every decision should show a one-click route to challenge it, with the clock visible.
  2. Accept harm drafts. Let people describe harms in plain language; work with partners to turn those drafts into community-reviewed tests.
  3. Triage by severity. Highest-severity cases trigger immediate pause or reversible defaults.
  4. Fix or explain. Publish the remedy or the reason with next steps, on the clock.
  5. Memorialize. Turn incidents into tests and link them from the contract changelog.
  6. Check back. Close the loop with the people who appealed and measure trust-under-loss.
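The first four steps above can be sketched as a minimal appeal-triage pipeline. Everything here is illustrative, not an existing API: the `Appeal` record, the severity tiers, and the per-tier clocks are assumptions chosen to make the flow concrete.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

# All names and clock values below are illustrative assumptions.

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Appeal:
    appellant: str
    harm_description: str          # plain-language harm draft (step 2)
    severity: Severity
    opened_at: datetime
    deadline: datetime = None      # the visible "clock" (step 1)
    resolution: str = None         # remedy or explanation (step 4)

# Hypothetical response clocks per severity tier.
CLOCK = {
    Severity.HIGH: timedelta(hours=24),
    Severity.MEDIUM: timedelta(days=3),
    Severity.LOW: timedelta(days=7),
}

def triage(appeal: Appeal) -> str:
    """Step 3: set the clock by severity; high severity pauses the system."""
    appeal.deadline = appeal.opened_at + CLOCK[appeal.severity]
    if appeal.severity is Severity.HIGH:
        return "pause"             # immediate pause or reversible default
    return "queue"

def fix_or_explain(appeal: Appeal, remedy: str, now: datetime) -> bool:
    """Step 4: record the remedy; report whether it landed on the clock."""
    appeal.resolution = remedy
    return now <= appeal.deadline
```

For example, a high-severity appeal opened on May 1 would get a 24-hour deadline and trigger an immediate pause: `triage(Appeal("resident-17", "benefit wrongly denied", Severity.HIGH, datetime(2024, 5, 1)))` returns `"pause"`.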

Buildable tools

One case: the flood-bot

What could go wrong

Interfaces

Public measure

The headline public measure for Pack 4 is trust-under-loss: after a bad outcome and attempted repair, do affected people report that the system became more trustworthy rather than less? A supporting diagnostic is whether people who lost on the merits still judge the process and its repair as fair enough to accept.

Supporting diagnostics include appeal timeliness, repair completion rates, repeat-incident rates, and whether community-authored evals come from a broad enough range of affected groups.
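The headline measure can be made operational as a simple before/after comparison among affected people. This is a minimal sketch under stated assumptions: responses are (trust_before, trust_after) pairs on any consistent numeric scale, which is not a standard survey instrument.

```python
def trust_under_loss(responses):
    """Share of affected people who rate the system MORE trustworthy
    after a bad outcome and attempted repair than before it.

    `responses` is a list of (trust_before, trust_after) numeric pairs;
    the pair format is an assumption for this sketch.
    """
    if not responses:
        return None                # no affected respondents yet
    gained = sum(1 for before, after in responses if after > before)
    return gained / len(responses)

# Example: 3 of 4 appellants report higher trust after repair.
# trust_under_loss([(3, 4), (2, 4), (5, 5), (2, 3)]) → 0.75
```

A fraction above 0.5 would suggest repair is building trust on net; tracking the same figure separately for people who lost on the merits gives the supporting diagnostic described above.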

A closing image: the workshop wall of retired broken parts

Imagine a workshop with a wall full of retired broken parts, each tagged with the story of how it broke, how to avoid future damage, and who fixed it. The wall is not a wall of shame — it is a wall of learning. The shop that hides its breaks will repeat them.
