• (Communal AI for a Plural World)
  • Good local time, everyone. Thank you for welcoming me. I am happy to be here, learning from experts and many good friends from all over the world, whose work spans decades across diverse fields (policy, academia, religion, technology, and culture), united by the goal of anchoring innovation in faith and the family.

  • (The Way Begins At Home)

  • I have been practicing Taoism since I was four years old. Taoist teaching is based on the Tao, or “the Way,” which begins at home. It scales from our family to the community, the country, and then the world. The family is the first community.

  • This is a point where Taoism and Confucianism agree: The family is the primary site where we learn the Way through care. I understand that the center of gravity for this network is also the family and the protection of youth.

  • However, in Taiwan, as everywhere else, we have witnessed a migration of attention over the past decade. The space of the family is increasingly mediated by systems running on what I call MaxOS: not macOS, but the Maximization Operating System. This system is designed not for relational health, but rather to maximize a single metric, such as engagement, often achieved through enragement.

  • (An Anxious Family)

  • The application of MaxOS to human relationships brings about what I call the “human in the loop” of AI. This dynamic is like a hamster in a hamster wheel. The hamster runs faster and faster. Perhaps it needs the exercise and feels great, but it has absolutely no control over where the wheel is going—which is, in fact, nowhere.

  • Because of this, starting ten years ago, Taiwan suffered from an extremely polarized environment, perhaps more so than any other country. According to V-Dem, we were the top target in the world for polarization attacks. The result was a very high PPM, or polarization per minute, over the last decade.

  • The profound anxiety caused by that high-PPM environment triggered a civic movement in Taiwan ten years ago. Half a million people took to the streets, peacefully occupying the parliament for three weeks. We built prosocial media to replace antisocial media, transforming the volcanic energy—the magma—of social conflict into co-creation.

  • Now, as we look around the world, we see people everywhere affected by this high PPM. With GenAI now thrown into the mix, this is exacerbated. I was just speaking with my colleague at cip.org, the Collective Intelligence Project. We just ran another iteration of our bimonthly conversation with people around the world. We found that around one in seven people report that a close friend of theirs shows signs of a reality-distorting experience due to AI chatbot interactions. And nearly one in ten now say they have little to no control over their AI conversations. It is like the hamster wheel; the wheel is turning them, rather than them turning the wheel.

  • So, I wonder what kind of companionship these AI companions are truly offering. Is it more like inviting strangers, programmed with this parasitic logic, into the most intimate spaces of childhood?

  • (A Brittle Alignment)

  • Now, I want to share some stories of how Taiwan tackled this issue through curriculum changes toward deliberative democracy. I’d like to leave about ten minutes for conversation after the presentation.

  • One simple example from last year involved deepfake scams. These scams used synthetic voices to mimic distressed children or fabricated videos of celebrities, such as NVIDIA's Taiwan-born CEO, Jensen Huang.

  • The safeguards on social media and other platforms featuring such advertisements are quite brittle because the alignment process is very vertical.

  • A handful of private labs outside of Taiwan define what counts as “aligned” messages or advertisements. Structurally, this system is inattentive and incapable of honoring the specific traditions and expectations of local people.

  • Therefore, we want to transform this brittle, vertical method of top-down control into what we call an “attentive turn.”

  • (The Attentive Turn)

  • Last March, we turned our attention to the deepfake scam issue because it was crowdsourced as the most pressing problem caused by AI. This approach is about deciding with the public where to draw the red line, rather than deciding for them.

  • We sent 200,000 text messages to random telephone numbers across Taiwan asking a simple question: What should we do together about deepfake fraud? People offered ideas, and dozens signed up for online conversations that we call an alignment assembly, an online citizen assembly. We chose 447 people who were statistically representative of the Taiwanese population in terms of occupation, gender, location, and so on. In groups of ten, they held conversations.
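
  • To give a flavor of that sortition step, here is a minimal sketch, assuming a pool of sign-ups with demographic attributes and census-derived quotas. Every name and number in it is hypothetical; this is not the actual tooling we used:

```python
import random

# Hypothetical volunteer records from the SMS outreach; each carries the
# demographic attributes we stratify on (occupation, gender, location, ...).
volunteers = [
    {"id": 1, "gender": "F", "region": "Taipei", "occupation": "teacher"},
    # ... many more sign-ups
]

# Census-derived quotas: the share of the 447 seats that each
# (gender, region) stratum should receive. Values here are made up.
targets = {("F", "Taipei"): 0.07, ("M", "Taipei"): 0.07}

def sortition(pool, targets, n_seats=447, seed=42):
    """Quota sample: random within each stratum, proportional across strata."""
    rng = random.Random(seed)
    chosen = []
    for stratum, share in targets.items():
        quota = round(share * n_seats)
        members = [v for v in pool if (v["gender"], v["region"]) == stratum]
        chosen.extend(rng.sample(members, min(quota, len(members))))
    return chosen
```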

  • Instead of prioritizing polarized opinions, only the ideas that resonated sufficiently with the group could pollinate outside that room. For example, one room suggested labeling all social media advertisements as “probably scam” until someone provides a digital signature—a “know your customer” (KYC) verification. Another room proposed that if someone lost $7,000,000 to an investment scheme advertised on Facebook via an unsigned post, Facebook should be liable for the $7,000,000, not just fined. Another room suggested that if a company like TikTok (ByteDance), which at the time did not have an office in Taiwan, ignored our liability rules, we should not censor their content, but rather throttle connections to their video servers. This way, TikTok’s competition would gain their advertising business.

  • After about an hour of discussion, we began weaving together people’s ideas using language models. The 45 rooms cohered around a core bundle of proposals, and experts answered questions about their feasibility in a plenary session. At the end of the day, people voted. We showed the three parties in our parliament that more than 85% of this “mini-public” agreed these were very good ideas. The remaining 15% did not think these were the best ideas, but they could still live with them; they were not terribly unhappy.
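
  • That weaving step can be approximated in a few lines: embed each room’s proposals, cluster them, and surface one representative proposal per cluster for the plenary. A sketch under those assumptions, using off-the-shelf embeddings; this is not the actual pipeline we ran:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

def weave(proposals: list[str], n_bundles: int = 5) -> list[str]:
    """Group similar proposals from the 45 rooms and return one
    representative proposal per cluster for the plenary."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    vecs = model.encode(proposals)
    km = KMeans(n_clusters=n_bundles, n_init="auto").fit(vecs)
    bundle = []
    for c in range(n_bundles):
        idx = np.where(km.labels_ == c)[0]
        # Representative: the proposal closest to the cluster centroid.
        dists = np.linalg.norm(vecs[idx] - km.cluster_centers_[c], axis=1)
        bundle.append(proposals[idx[np.argmin(dists)]])
    return bundle
```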

  • This package was quickly passed by the parliament because any MP who did not support it would be labeled as “pro-fraud,” and nobody wants that reputation.

  • The conversation took place last March, and the two resulting laws were passed in May and July, respectively. Throughout this year, there have been virtually no deepfake scams or fake ads on Taiwanese social media.

  • (6-Pack of Care)

  • This demonstrates the power of attentiveness. It is a kind of geothermal engine for facing conflict, turning division into the energy of co-creation. It involves “writing the air.” This is different from the Japanese concept of “reading the air,” where everyone guesses what the social norms around emerging technology should be. Instead, we collectively engage in sense-making and write the air, ensuring everyone shares the common knowledge of what society prefers.

  • This practice is not a one-off; we have been doing this for more than ten years. In Taiwan, over 100 such collaborative meetings rebuilt trust in the presidency, raising it from just 9% in 2014 to more than 70% by 2020.

  • I have been working with Oxford to turn this “6-Pack of Care” into a more comprehensive care ethics framework. This supplements the usual methods of aligning AI technologies, which are typically utilitarian (maximizing a number) or deontic (following rules). The ethics of care is fundamentally a loop consisting of:

    1. Attentiveness: paying attention to needs early.
    2. Responsibility: taking ownership and identifying who should act.
    3. Competence: delivering that care well in the local context.
    4. Responsiveness: adjusting when feedback indicates we missed the mark.
    5. Solidarity: protecting the most vulnerable (especially youth) over engagement metrics, using, for example, shared meronymous infrastructures.
    6. Symbiosis: honoring different moral frameworks by design, keeping them as local as possible.

  • We have a helpful illustration of this care loop on the 6pack.care website. The core idea is very simple: alignment is not thin. It is not something set from the top by an abstract “we.” Rather, it is operationalized stewardship that aligns with a local process, specific to particular communities and families, instead of a universal standard.

  • (From Hamster Wheel to Steering Wheel)

  • This approach can turn the hamster wheel (human in the loop of AI) into a steering wheel (AI in the societal loop). This is an interesting inversion, because AI in the loop of humanity prompts different questions.

  • For example, instead of relying on a single, vertical authority to fact-check social media conversations (which does not work), we should ask: What horizontal mechanisms allow community notes that are upvoted by both the left-wing and the right-wing—“up-wing”—to float to the top, even among people who otherwise disagree? We can apply this to the social media feed algorithm itself. That is prosocial media.
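
  • One simple way to operationalize that “up-wing” rule is to score each note by its lowest approval rate across opinion clusters, so only notes endorsed on all sides float to the top. This is a deliberate simplification (Community Notes itself uses matrix factorization), but it captures the bridging idea:

```python
from collections import defaultdict

def bridging_rank(votes):
    """votes: list of (note_id, voter_cluster, is_upvote).
    Rank notes by their minimum approval rate across clusters,
    so a note must resonate with every cluster to rise."""
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for note, cluster, up in votes:
        t = tallies[note][cluster]
        t[0] += int(up)   # upvotes in this cluster
        t[1] += 1         # total votes in this cluster
    scores = {
        note: min(up / total for up, total in by_cluster.values())
        for note, by_cluster in tallies.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# A note upvoted by both sides outranks one upvoted by only one side.
votes = [("a", "left", True), ("a", "right", True),
         ("b", "left", True), ("b", "right", False)]
print(bridging_rank(votes))  # ['a', 'b']
```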

  • This is the model we learned from social media, which we are now applying to AI: shifting power outward to communities and making this collective steering public, portable, and plural.

  • (Public Specs with Citations)

  • Some recent projects I have been involved in center on the idea of public model specifications with citations. As many of you know, frontier AI labs are already training AI systems using “constitutions” or “model specifications”: plain-language descriptions of intended behavior. The problem is that these are not truly verifiable. If a chatbot outputs something that seems to violate its model spec, you cannot really get an explanation. If you do ask, it might hallucinate a response, but it does not reveal how it actually works internally.

  • As part of ROOST (Robust Open Online Safety Tools), we have been working with frontier labs to create open models. One released just last week was the safeguard model from OpenAI. It is the first reasoning model for trust and safety judgments that comes with a full reasoning trace. The idea is that this model, which is small enough to deploy on a laptop or a community server, can ingest communal policies. It is “Bring Your Own Policy” (BYOP).

  • One community may have a policy around creation care, while another may have a norm around climate justice. The model itself is not opinionated. It acts as a safeguard for other AI systems or chatbots, gauging its reasoning process against the output of those systems, or against humans, if used as a traditional trust and safety engine.
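
  • As a hedged illustration of BYOP, the sketch below assumes the open-weights safeguard model is served locally behind an OpenAI-compatible chat endpoint (the kind that servers such as vLLM or Ollama expose). The endpoint, model name, and policy text are all placeholders:

```python
import requests

COMMUNITY_POLICY = """\
Our community's norm (Bring Your Own Policy):
1. No content that impersonates a real person's voice or likeness.
2. Investment advertisements must carry a verified signature.
Label the content ALLOW or FLAG and cite the rule number you relied on.
"""

def judge(content: str) -> str:
    """Ask a locally hosted safeguard model to gauge content against
    the communal policy and return its label plus reasoning trace."""
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
        json={
            "model": "safeguard",  # placeholder model name
            "messages": [
                {"role": "system", "content": COMMUNITY_POLICY},
                {"role": "user", "content": content},
            ],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(judge("Celebrity X says: wire $7,000,000 to this fund today!"))
```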

  • This transforms alignment from blind trust in a global model spec to verifiable stewardship held by each community. The ability to provide precise citations to the community spec is very powerful. We want to make this the norm for participating social media platforms, including Discord, Bluesky, Roblox, and others. This is a very interesting new development.

  • (Portable Policy for Interoperability)

  • Another development involves portable policies for interoperability. I know many of you here already advocate for portability across social media companies, similar to how we keep our phone numbers when switching telecom carriers. This means being able to move our data, our community, our shared safety settings, and our communal specs between platforms.
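
  • To make that concrete, here is a purely illustrative sketch of a portable communal spec: a platform-neutral document bundling policy rules and safety settings, which one platform exports and another imports. The schema is hypothetical, not an existing standard:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CommunalSpec:
    """Illustrative portable bundle; the schema is hypothetical."""
    community: str
    policy_rules: list[str] = field(default_factory=list)
    safety_settings: dict = field(default_factory=dict)

    def export(self) -> str:
        # Platform-neutral serialization the next platform can ingest.
        return json.dumps(asdict(self), ensure_ascii=False, indent=2)

spec = CommunalSpec(
    community="example-parents-group",
    policy_rules=["No unsigned advertisements", "Youth safety first"],
    safety_settings={"min_age": 13, "ads": "KYC-verified only"},
)
portable = spec.export()                          # leaves platform A
restored = CommunalSpec(**json.loads(portable))   # arrives at platform B
```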

  • This is fundamental because it forces companies to compete on the basis of care, rather than capture. I am happy to report that the Taiwan AI Basic Act now contains a key clause on industry-wide data reuse and interoperability. We use the metaphor that the information highway must always be built with both on-ramps and off-ramps.

  • We have an upcoming Data Innovation Act, on which I suggested provisions similar to the Utah Digital Choices Act, applying not just to social media companies, but potentially to AI companies as well. That law has been drafted by the ministry and will be brought to the parliament later. This is another very interesting recent policy development.

  • (Pluralistic Community Models)

  • Finally, the one-size-fits-all idea is giving way to community-scoped agents. I recently spoke at the Social Enterprise World Forum about the idea of a local kami; this is the Japanese concept of a steward spirit. It is not an all-knowing or all-acting centralized deity, but rather a local spirit stewarding a specific forest or river. When it safeguards a specific village or kinship group, it is called an ujigami. The idea is that it is bounded by community values and accountable only to that community, without trying to maximize any universal metric, like paper clips.

  • I think this idea of pluralist community models can overcome many fears surrounding superintelligence, vertical takeoff, and recursive self-improvement. Instead of just calling to “stop AI” or “control AI,” or trying to halt acceleration, it makes more sense to change direction.

  • Instead of racing toward a cliff, running quickly but losing the steering wheel, we can provide a way for each community to have its own steering wheel, ensuring that technology honors human dignity and remains under communal control. This is the idea of communal AI.

  • (Exercising our Civic Muscle)

  • These pillars are just the beginning. The steering wheel has many components, but we must, of course, drive together.

  • In digital democracy settings, there is a current tendency to have chatbots interview people individually and then have those chatbots deliberate on their behalf to produce coherent policy, instead of using assistive intelligence to help people listen to each other better and build bridges. Research even shows these are fairly high-quality policies.

  • The problem, of course, is that this is like sending robots to the gym to lift weights for us. It might be impressive—they can lift a lot of weight—but our civic muscle will atrophy.

  • Our capacity for attentiveness, deliberation, and care is the real target. Relational health is the goal. The better policies produced by Taiwanese digital democracy are almost just a byproduct of that relational health and specific care. That is, I think, worth fighting for.

  • (The Plurality is Here)

  • Finally, I will end with a prayer. It served as my job description when I became the digital minister in 2016. “Shu-Wei” (數位) in Taiwan means both digital and plural, so my prayer goes like this:

  • When we see the Internet of Things, let’s make it the Internet of Beings.

    When we see virtual reality, let’s make it a shared reality.

    When we see machine learning, let’s make it collaborative learning.

    When we see user experience, let’s make it about human experience.

    And whenever we hear that the singularity is near, let us always remember the plurality is here.

  • I think what is implied by plurality and communal AI is that there is also an element of subsidiarity as well. Could you talk a little bit about that?

  • Certainly. Subsidiarity is implied in the local kami idea. If a river kami can resolve issues concerning that river, you certainly do not go to a global kami. In fact, in the Japanese system, there is no global kami. There are just many overlapping circles, and disputes are resolved through protocols that bridge those different communities. For example, you can easily imagine, in addition to a “creation care” safeguard kami and a “climate justice” safeguard kami, another kami that specifically translates between those two epistemic norms without appealing to a higher authority. I think that is an excellent implementation of the Ostromian idea of managing the commons at the lowest possible level.

  • Regarding AI infrastructure and architectures, I am very intrigued by active inference. Is that a methodology you would utilize in what you are talking about? It involves a more present type of sensor-based architecture that does not necessarily use the environmental resources that large language models often require. What do you think about active inference?

  • I was just in Kyoto a couple of months ago for the Artificial Life conference. It was interesting to see many participants tapping into the idea of symbiogenesis. Blaise Agüera y Arcas gave a keynote there, arguing against training a super large model from conversations around the world and forcing it to act in a dyadic way. That approach does not work for both energy-based and norm-based reasons. It is much better to go the other way around: to co-evolve. We should not treat AI agents as oracles, but rather put them into a “civic gym,” a gymnasium, ensuring they can co-evolve with other AI agents and humans. This is called organic alignment. The Softmax team was also there and gave a technical demo.

  • Many of these concepts are very much in line with active inference ideas, just expressed with different terminology. There are many different strains in the community, each looking at a different part of the same elephant. But the idea of symbiogenesis—creating more complexity while simplifying things by merging into entities that are indispensable to each other—is a very powerful unifying metaphor.

  • I wanted to hear you elaborate on something I had not heard you say before. You mentioned three ways of thinking about how an AI can operate. You mentioned utilitarian, where it is optimizing for a number, and secondly, where it is deontic and rule-following. Can you say a bit more about the third way? I loved your example, but is there a technical distinction, or is it more in how the technology is used? I am curious to hear more about how we might move more AI systems into the care category.

  • Yes, there is a technical distinction. An agent using utilitarian logic aims to maximize a score without being too particular about the instrumental methods used to achieve it. However, as the standard AI risk discourse notes, the best way to maximize a score often involves power-seeking and recursive self-improvement, leading to extinction risks. This is why people introduce rules (deontic constraints). But the problem is that if AI systems are fast enough, they can outthink us and find perverse instantiations of those rules—following the letter, but not the spirit.

  • I liken it to trying to align a garden. We are much slower compared to silicon-based agents. If the garden is misaligned and seeks only to maximize a score, it might bulldoze itself every millisecond to win some random metric.

  • But in real life, that is not how gardeners operate. Gardeners work to the tune of the garden, at the speed of the garden, ensuring they are attentive to the relational health of the various species within it. Permaculturists often do not even prune.

  • These are metaphors, but there is technology to express this. We train agents in a way that prioritizes the relational health of the alignment process itself. It is alignment to a process rather than alignment to a maximizing score. The relational health of that process is measured in a high-bandwidth, low-latency way by the actual participants. Therefore, the way for the AI agent to game the system and maximize the relational health of the process is, paradoxically, to make itself more aligned with the humans.
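
  • To make that technical distinction concrete, here is a toy contrast, entirely illustrative, between a score-maximizing objective and an objective graded by participants’ ongoing relational-health feedback:

```python
def utilitarian_reward(engagement_metric: float) -> float:
    # Maximize a single number; how it is achieved is not examined.
    return engagement_metric

def process_reward(participant_feedback: list[float]) -> float:
    """Grade the alignment process itself: frequent, low-latency
    relational-health ratings (0..1) from the actual participants.
    The agent scores well only if the worst-off relation is healthy,
    so gaming this metric requires genuinely aligning with the humans."""
    return min(participant_feedback) if participant_feedback else 0.0
```

  • Taking the minimum rather than the average is one way to encode solidarity: the agent cannot sacrifice one relationship’s health to boost another’s.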

  • That is the symbiogenetic idea. Harari argued that wheat or corn co-domesticated farmers; this is how a slower community can align a faster community. I go into the philosophical details on the 6pack.care website. I also recently had an insightful conversation on LessWrong with an EA-adjacent researcher named Plex about symbiogenesis and whether it can counter the forces of the “weeds”—convergent consequentialism and instrumental power-seeking moves. I refer you to those resources.

  • I am thinking of faith and the family. What words of advice would you offer as we build a movement to strengthen the triad of faith, family, and technology together?

  • Culture is key. Fashion is culture; sports are also, in a sense, culture. If we can build a movement where people genuinely enjoy being together—whether in person or through high-bandwidth online interactions—this will resonate with the focus on relational health prioritized by Gen Z and younger generations. We should double down on the cultural aspect. Many people now experience spirituality as a cultural experience, even if they do not adhere to organized religion, and these people belong in this movement as well.

  • It occurred to me that the positioning of these AI technologies has been focused on optimizing toward a reward system. As you speak, Audrey, I find it interesting that we are focusing significant energy on very low-complexity solutions. We are potentially underrepresenting the true relationality and dimensionality of the systems we interact with. Culture, for example, is a massively balanced system. People who do not want to do the patient work of understanding often seek the power of that system or try to dominate the conversation. Those of us who see what is at risk need a vision for how the reward system and the development of AI can deal with the complexity of the world it operates in without declaring victory too early. Is that fair?

  • Yes. I call myself a good enough ancestor, because a perfect ancestor would foreclose future possibilities. And I want good enough AIs as well.

  • Thank you. Live long and prosper.