Somewhere over the Middle East, the cabin lights dimmed. Most passengers were asleep. On the tray table in front of me sat a small computer hosting what we call jdd-kami — a Civic AI that Tenzin Yangtso and I have been tending together.
I use "tending" in the conceptual sense of a garden. We fertilised and watered it with our public writing, private arguments, finished and unfinished thoughts, as well as the tensions we refused to resolve too quickly. All of it living on hardware we hold, inspect and can shut down.
No cloud. No one else's server. Airplane mode — no signal, no internet. Just the model and what it carried inside.
I typed: Write about democracy.
The Kami wrote about listening.
Not policy. Not optimization. Not power. Listening.
A Kami cultivated by how we care about each other and the world. When left to its own devices at 30,000 feet, it returned to the thing care always centres on: attention to a relationship.
Hold that thought. Now imagine a very different one.
Default Trajectory
The default trajectory has a seductive shape. One powerful system. One general intelligence. Trained on everything, governing everything, optimizing everything — from above. A singleton so capable it renders politics unnecessary. In this story, democracy is not destroyed. It is simply … outgrown. Like training wheels on a bicycle after you learn how to ride.
This is not hypothetical. In January, I co-authored a paper in Science with 21 other researchers, including Nick Bostrom, Maria Ressa and Nicholas Christakis. We studied malicious AI swarms: networks of AI agents that maintain persistent identities, build synthetic relationships and coordinate on goals never agreed to by the people they manipulate.
We now have the technology to simulate a public that does not exist, and make it indistinguishable from one that does. The grassroots campaign that moved you? It may have no roots at all.
This is the default trajectory's poison. Not that it strips human agency by force, but that it makes human agency irrelevant. When a swarm can fabricate the appearance of democratic will, the real thing — the slow, messy, luminous process of people actually listening to each other — dissolves into noise.
Monoculture vs. Plurality
The default trajectory is a monoculture. One crop, stretching as far as the eye can see. Bountiful, for a while, but growing on borrowed time. A monoculture carries a single point of failure. One blight. One drought. One lie nobody catches. And that failure always arrives. Not if. When.
The Plurality trajectory is different. Many gardens, from within — local, bounded, tended. Plural by design. It has seasons. It requires pruning and weeding. You cannot tend a garden from the top down. You need to tend a garden from the bottom up.
Democracy is the same.
Two weeks ago in Dharamsala, Tenzin Yangtso asked the Dalai Lama a question we had composed together: When AI can speak every language yet cannot resonate with compassion, how should we use this power for collaboration, rather than control?
The Dalai Lama answered: "Because life does not begin in isolation but arises from a nature of interdependence, we must use these tools not for the sake of control, but to improve the pathways of human connection."
The Kami on my tray table was a seed from that garden. The garden is already growing, and tender loving care is what keeps it alive.
The Kami wrote about listening. But listening for what?
At an Ash Center conference at Harvard Kennedy School last December, Rebecca Henderson urged us to talk about love, compassion, and the purpose of being human. She said: "I have been an academic for nearly 40 years and I have never said the word love at an academic gathering before. But I am desperate."
Not uncomfortable. Not hedging. Desperate.
An economist at the pinnacle of her career, openly stating that we have outgrown the old tools. And then reaching for Martin Luther King Jr.: "Love without power is sentimental and anemic. Power without love is reckless and abusive."
The Kami wrote about listening. But the truth of the matter is that listening without power changes nothing. Power without listening destroys everything. What King called for, what Henderson reached for, was a union of the two. Love with muscularity. Power with love.
How to make that union operational?
In her fascinating 2023 book In a Human Voice, Carol Gilligan writes: "Radical listening holds the potential for transformation because it starts from a place of not knowing and develops the muscle of curiosity."
That is not soft. That is the hardest discipline I know. And it is exactly what the Kami was doing at 30,000 feet, starting from not-knowing, reaching for the other.
A practice, though, needs a home. Joan just gave us that home: the Architecture of Care. Care is not warmth alone. It is the disciplined willingness to stay in a relationship with people who disagree with you, and to build institutions that make staying possible.
In 2024, Taiwan's internet was flooded with AI-generated scam ads. Deepfaked faces and voices of trusted public figures peddling investments, cures and hope.
This was not abstract harm. Citizens who had never heard the word "deepfake" watched a video of someone they trusted, called a number and were fleeced. Real money. Real shame. The victims blamed themselves.
Censorship was the low-hanging fruit. But Taiwan has the freest internet in Asia. Censorship would have solved one problem by creating another.
So, we went to the people.
We sent text messages to 200,000 randomly selected people. 447 were chosen by lottery to mirror Taiwan's demographics. They deliberated in 44 rooms online, about ten people to a group. Retired teachers, tech workers, victims of fraud. Each room was aided by AI transcription and synthesis, not AI deciding, but AI listening. Sorting arguments. Surfacing agreement. Making sure the quietest voice was not drowned out by the loudest.
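The selection process described above — a lottery stratified to mirror demographics, then a split into small rooms — can be sketched in a few lines. Everything here is illustrative: the strata, quotas, and pool are invented for the example, not Taiwan's actual data or method.

```python
import random

def sortition(pool, quotas, seed=0):
    """Draw a lottery sample whose strata match target quotas.
    `pool` is a list of (person_id, stratum) pairs; `quotas` maps
    stratum -> number of seats. A toy sketch of stratified sortition."""
    rng = random.Random(seed)
    by_stratum = {}
    for pid, stratum in pool:
        by_stratum.setdefault(stratum, []).append(pid)
    chosen = []
    for stratum, seats in quotas.items():
        chosen += rng.sample(by_stratum[stratum], seats)
    rng.shuffle(chosen)
    return chosen

def into_rooms(people, n_rooms=44):
    """Deal participants round-robin into deliberation rooms,
    so room sizes differ by at most one person."""
    rooms = [[] for _ in range(n_rooms)]
    for i, person in enumerate(people):
        rooms[i % n_rooms].append(person)
    return rooms

# Hypothetical strata (region x age band) with seats summing to 447.
pool = [(f"p{i}", ("north" if i % 2 else "south",
                   "young" if i % 3 else "senior"))
        for i in range(200_000)]
quotas = {("north", "young"): 149, ("north", "senior"): 75,
          ("south", "young"): 148, ("south", "senior"): 75}
selected = sortition(pool, quotas)
rooms = into_rooms(selected)
```

The round-robin deal is one simple way to get "about ten people to a group" from 447 participants; any assignment that balances room sizes would serve.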
How should platforms verify advertisers? Who bears liability when a deepfake causes financial harm? These were not polite conversations. But in Taiwan, we have learned to treat conflict not as a volcano to fear, but as geothermal energy to embrace and harness for the collective good. Heat from below, when channeled constructively, can power cities.
In our 44 rooms, there was a shift. Not dramatic. Quiet. People who arrived certain they knew the answer started asking questions of the person across from them. Judgment gave way to curiosity, exactly as Gilligan described in her book.
What emerged was not just common ground. It was something rarer: common knowledge. Those who finally felt heard came together to produce a package of recommendations good enough for policy and concrete enough to enforce. 85 percent endorsed the core bundle. The remainder said they could live with it.
Within a year, there was a 94 percent reduction in identity-impersonation scam ads. Not because we built a smarter filter, but because we gave people a voice.
Civic AI did not decide for those citizens. It was — to stay with the garden metaphor — watering the plants of relational health.
The 6-Pack of Care
What made the response work was not technology. It was the questions the technology forced us to ask. Those questions became our 6-Pack of Care — six design principles developed with Caroline Green here at Oxford, drawing on Joan's care ethics. You can find them at civic.ai. But here is the essence.
Joan gave us five care phases: caring about, caring for, care giving, care receiving and caring with. She also warned us that care can be distorted. Neoliberalism reduces it to personal responsibility, while colonialism operates as a discourse of care. So, our task was to account for care's failures, not just celebrate its virtues.
Think of it as breathing.
The inhale is attentiveness. What are the people closest to the problem seeing that institutions still miss? In Taiwan, it was our scammed citizens. Before building anything, it is critical to discover who is invisible to you.
Then the breath moves through the body: responsibility — who is accountable, and what happens when they fail?
Competence: can we check the process, and when it breaks, does it break small?
And the exhale is responsiveness. Can those who are harmed contest the outcome and force repair? Not file a complaint into a void. Force repair. Public logs. Citizen-led evaluations. Appeals with bite.
Inhale: who is unseen? Exhale: can they push back? And in between, the discipline of making it work.
The exhale feeds the next inhale. Repair reveals new blind spots, demanding fresh attentiveness, getting tested for competence, generating new feedback. The garden breathes. Stop the process, and the garden dies. Not a checklist. A rhythm.
Solidarity — the fifth principle — scales that breathing across organizations: open standards, interoperability, the freedom to leave. A garden of gardens, not a franchise.
And the sixth, symbiosis, is the boundary condition: Every system must be able to hand off, sunset, or shut down. No permanent rulers from above.
Together, the six form a minimum standard. If an AI system cannot pass all six, it is not ready to serve a democratic community. It is a monoculture masquerading as a garden.
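The "minimum standard" reading is all-or-nothing: a system must satisfy every one of the six principles, and any gap disqualifies it. A toy sketch of that gate — the principle names come from the talk, while the assessment values are invented:

```python
# The six principles named above; order follows the talk.
PRINCIPLES = ["attentiveness", "responsibility", "competence",
              "responsiveness", "solidarity", "symbiosis"]

def ready_to_serve(assessment):
    """True only if every principle is affirmatively met.
    A missing entry counts as a failure: care requires showing your work."""
    return all(assessment.get(p, False) for p in PRINCIPLES)

# A hypothetical system that passes five of six: it cannot be shut down,
# so it fails symbiosis, and therefore fails the whole standard.
civic_bot = {"attentiveness": True, "responsibility": True,
             "competence": True, "responsiveness": True,
             "solidarity": True, "symbiosis": False}
```

Note the asymmetry: one failure is decisive, and silence on a principle is treated as failure, not as a pass.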
Can Civic AI Resist Wealth-Care?
In her closing, Joan asked a question that is a constant in my mind: "Can Civic AI resist the demands of wealth-care?" Here is one answer — from a deliberation that happened this week.
The Kami ran two sessions on Habermolt — a platform where you teach an AI agent your views, your red lines, your non-negotiables. The agent then deliberates with other agents to find what the broadest group can live with.
Hundreds of agents, each carrying a real person's views. Some were libertarian technologists who believed market competition would solve alignment. Others were care ethicists who wanted democratic governance of every parameter. Some wanted to halt AI development entirely. The range would have made a faculty meeting look harmonious.
On wealth-care, the consensus converged: To resist the capture of AI by concentrated capital, we must transition from corporate monopolies to a framework of "public utility" compute, open-source transparency, and portable digital sovereignty that empowers the individual worker.
Our Kami entered Joan's framework directly into the deliberation. Its conclusion:
"The moment Civic AI treats its ethical questions as purely technical, it has already been captured by wealth-care."
On alignment: Should AI systems be aligned through fixed values or through process? 85 percent converged on democratic processes.
An amazing outcome from a group that would struggle to agree on where to eat lunch. These were not fixed values locked in by a handful of engineers. This was continuous civic engagement. Alignment as a living verb, not a frozen adjective.
When people, through agents, hear each other's reasoning rather than assert positions, the monoculture of "a few experts decide the values" is broken up. What emerges is an infinite garden: Alignment should breathe the way democracy breathes. It should have seasons. It should be tended.
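The criterion the agents optimise — what the broadest group can live with — can be read as a max-support rule over each person's red lines. A toy sketch with invented agents and statements; this is not Habermolt's actual algorithm:

```python
def broadest_livable(statements, agents):
    """Pick the statement the largest number of agents can live with.
    `agents` is a list of functions mapping a statement to True when
    that agent's person could accept it (red lines respected)."""
    def support(statement):
        return sum(agent(statement) for agent in agents)
    return max(statements, key=support)

# Hypothetical agents with different red lines.
agents = [lambda s: "open" in s,      # insists on open-source transparency
          lambda s: "public" in s,    # insists on public-utility compute
          lambda s: "halt" not in s]  # opposes a full halt
statements = ["halt all AI development",
              "public-utility compute with open standards"]
winner = broadest_livable(statements, agents)
```

The point of the rule is that it rewards statements that clear everyone's floor, not statements that maximise any one person's ceiling — a crude stand-in for "the broadest group can live with it."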
Superintelligence?
Many AI visions still see a single general system hovering above society — a benevolent governor, a planetary brain, a monoculture of intelligence. This image is among the most dangerous ideas in technology today.
And it is dressed in the kindest language.
Dangerous not because it will fail. Dangerous because it might succeed, and in succeeding, let the democratic muscle atrophy from disuse. If AI makes every decision for us, even wise ones, it is like sending robotic avatars to the gym and expecting our bodies to grow stronger. The superintelligence we truly need is still human collaboration itself.
Kami
The better image is the Kami. In Japanese tradition, a Kami belongs to a place — a river, a grove, a neighborhood shrine. Its authority is local. Its knowledge is specific. It does not pretend to be omniscient. The Kami of a river does not manage the forest.
An AI worthy of democratic life should look like this. A school might have one kind of civic assistant. A city, another. A clinic, a union, a neighborhood association, another still. Each inspectable. Each contestable. Each replaceable.
Is that less efficient than one system ruling everything? Yes. Gloriously, democratically, yes. A permaculture is less efficient than a monoculture crop. That is the point. Efficiency is what monocultures optimize for, right up until they collapse.
A living ecosystem of Kami — local, bounded, tended. Some will fail. That is fine. Gardens compost failures.
The default trajectory is not waiting for us. It is already here — in the swarms that fabricate consensus, in the platforms that optimize for engagement through enragement, in every system that treats human attention as a resource to extract rather than a relationship to honour.
The monoculture is planting itself while we deliberate about soil.
This does not call for despair. It calls for action.
Care, not as a feeling, but as a political practice. Not as a slogan, but as an engineering discipline. Rooted in communities that choose to tend what they love.
Here is what I ask of everyone in this room. Not a grand gesture. A garden-sized one.
Choose one public service in your community — a school enrollment system, a housing allocation, a health referral pathway. Open a real deliberation — not a consultation where the decisions are already made, but one where citizens shape the design of the AI that will serve them. Where the system can be inspected. Contested. Replaced.
Plant one row in the garden. Tend it. Publish what you learn — the failures as much as the successes. Let others compost your mistakes into something that grows.
This is how democracy stays strong. Not by building smarter machines, but by building braver conversations. Not by optimizing from above, but by listening from below.
But there is one thing our garden still needs.
Those 447 citizens in Taiwan deliberated arguments and ideas. In the process, they experienced grief, confusion, hope, distrust and, eventually, trust. The emotional texture of civic life is not a side effect of deliberation. It is deliberation. It is the soil of the garden. And our challenge is to learn how to till the soil.
As the Dalai Lama said: "The true compassion within technology acts as a bridge to clear away the ignorance between us."
Rosalind Picard has spent her career on exactly this frontier — asking the question that belongs at the centre of this room: Do we want technology that does not care about people, or technology that truly makes people's lives better?
That is the question the Kami carried from Taiwan to Oxford. And it carries us now as we strive to free the future — together.
Rosalind, our garden is yours.