• Hello—how are you? You seem to be a day ahead. Is it okay if I start recording?

  • Yes, literally in the future. Doing well, thank you — and yes, please go ahead and record.

  • And thank you for agreeing to donate this transcript to the public domain. For context, I’ve published 2,000+ public meetings over the past decade, so I am very comfortable with open publication.

  • Great. As you know, we are exploring the “imaginative landscape of AI”. By “imaginative landscape of AI,” we mean the entirety of visions, positions, and conflicts seen in relation to AI—especially the relatively novel ones that have developed in relation to this emerging technology. From our point of view, the “imaginative landscape of AI” is not just about the technological side but about broader concepts and ideas for the future of communications and society with AI, as well as possible dangers for humans. We are particularly interested in learning about your perspective on this landscape, given your diverse experiences. I would like to structure the interview in three parts: your background; the imaginative landscape of AI; then your specific perspective on AI. Let’s start with your path into this field—how did you arrive at today’s AI debates?

  • My work is about societal steering of AI to foster cooperation.

  • I served as Taiwan’s first digital minister — sometimes described as a “cyber ambassador,” where cyber comes from kybernētēs (Greek for “steersman of a ship”) — and my focus was using AI‑augmented tools to help society coordinate, with over 7 years in the Cabinet, including work touching social entrepreneurship, open governance and youth engagement.

  • We pioneered sense‑making and bridging systems: qualitative aggregation of public preferences into “good‑enough group selfies” that allow the public to steer policy. We used those methods on issues like deepfake regulation and designed codes of conduct for AI agents entering society. We also worked on locally tuning open models so they align with cultural pluralism rather than a single, imposed norm.
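  As a rough illustration of the bridging aggregation described above, here is a minimal sketch loosely modeled on Polis‑style clustering; the vote data, cluster count and scoring rule are illustrative assumptions, not the production pipeline we used.

```python
# Minimal sketch of a "bridging" aggregation (hypothetical data):
# cluster participants by their votes, then surface the statements
# that every opinion cluster tends to agree with.
import numpy as np
from sklearn.cluster import KMeans

# Rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1,  1,  1],
    [-1,  1,  1,  1,  0],
    [-1,  1,  1,  1, -1],
    [ 0,  1,  0,  1,  1],
])

# Step 1: find opinion clusters (the faces in the "group selfie").
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# Step 2: a statement's bridging score is its *minimum* average agreement
# across clusters, so it only ranks highly if every group supports it.
def bridging_scores(votes, clusters):
    scores = []
    for s in range(votes.shape[1]):
        per_cluster = [votes[clusters == c, s].mean() for c in np.unique(clusters)]
        scores.append(min(per_cluster))
    return scores

for statement, score in sorted(enumerate(bridging_scores(votes, clusters)),
                               key=lambda kv: -kv[1]):
    print(f"statement {statement}: bridging score {score:+.2f}")
```

  The design choice that matters is the final min(): a statement only surfaces if every opinion cluster leans toward it, which is what makes the output a “good‑enough group selfie” rather than a simple majority snapshot.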

  • I actually entered the field early. I started programming in 1989. The Tiananmen crackdown shaped my family’s attention to democracy and autonomous communication systems — my father’s thesis explored those dynamics. After the Berlin Wall fell, he went to Germany for PhD studies; I also studied for a year in Saarland. That exposed me to what we would now call second‑order cybernetics — trust, feedback, self‑organization. I later left middle school with my principal’s blessing after a science‑fair project using AI for philosophical inquiry, co-founded startups in Taiwan around mediatization, and built intermediary algorithms to improve epistemic security and resilience. That entrepreneurial work eventually intersected with public service.

  • When I read your collaborative book Plurality, AI felt like an enabling layer rather than the focus. How does AI relate to the book’s ideas?

  • The book centers on civic care; AI is instrumental to achieving it.

  • We emphasize attentiveness, responsibility, competence, responsiveness and plurality, connecting them so that conflict becomes fuel for co‑creation. AI amplifies both the breadth of conversation (translation, summarization, bridging across communities) and its depth (sustained commitment and shared understanding within a group).

  • It is not only human–human; we increasingly see human–agent, agent–agent, and even human–animal mediation via AI. We explicitly avoid framing AI as an end state like a “technological singularity.” The very name Plurality was chosen to contrast with “Singularity” — we argue for many centers of agency and meaning, not one.

  • Now for the second part of my questions: Who are the main actors shaping today’s imaginative landscape of AI — institutions, companies, movements?

  • I see three live traditions:

    1. Utilitarian / existential risk and benefit. Strong in Silicon Valley and Oxford‑adjacent work — Nick Bostrom, Toby Ord, Anders Sandberg, and others — framing x‑risk and long‑term benefits (e.g., Superintelligence, The Precipice).
    2. Rights and justice (deontic). Emphasizing rights, fairness and non‑discrimination — such as Oxford AFP fellows Joy Buolamwini (Algorithmic Justice League), Alondra Nelson (White House AI Bill of Rights effort), Yuval Shany, Cass Sunstein and others — arguing AI shouldn’t be exempt from rights‑based obligations.
    3. Open innovation. Smaller but important: decentralization and power‑diffusion as virtues in themselves. This tradition resists over‑concentration even when it is justified by safety or speed, in order to preserve democratic steering capacity.

  • I am presently working at Oxford’s Institute for Ethics in AI (distinct from the Future of Humanity Institute) as an AFP fellow and continue conversations across these communities.

  • Effective altruism has been criticized for a narrow focus on existential risk—and it faced funding shocks. How do you see it now?

  • The utilitarian lens has limits, but it is right about immediate catastrophic risks to epistemic security.

  • Polarization, manipulation and degraded shared reality are already here — often amplified by misaligned recommender systems, synthetic media and tailored persuasion. My reframing is: do not stop at harm mitigation (where “success” looks like nothing happening). Build pro‑social sense‑making that measurably widens common ground and reduces polarization. If we design AI for civic care, we get mitigation and generative benefits.

  • For reference, I am developing a microsite called the “Six Pack of Care” — practical components for deploying civic‑care systems at scale.

  • And open‑source AI—doesn’t openness conflict with safety?

  • In defense‑dominant domains — when it is easier to defend than to attack — openness increases security.

  • The 1990s cryptography wars taught us as much: strong crypto (e.g., PGP) was once treated as a munition under export controls, yet secure practice ultimately came from open algorithms, open reference implementations, responsible disclosure and broad red teaming. Today even post‑quantum cryptography standards come from open processes.

  • That logic informed ROOST (Robust Open Online Safety Tools), which I launched with Yann LeCun and Eric Schmidt at the Paris AI Action Summit. Take synthetic CSAM: a federation of mid‑size platforms (e.g., Bluesky, Roblox, Discord) can detect locally, convert signals to non‑identifying text (e.g., grooming patterns) to respect privacy and legality, and share those signals — similar to cybersecurity incident exchange — instead of relying solely on a single centralized tool like Microsoft PhotoDNA. In short, security and openness can reinforce each other when the problem is defense‑dominant.
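  To make the federated exchange concrete, here is a minimal sketch of how a platform might turn a local detection into a non‑identifying, shareable record; the field names, labels and exchange function are illustrative assumptions, not ROOST’s actual schema or API.

```python
# Minimal sketch of federated safety-signal sharing (illustrative only:
# the schema and exchange stub below are assumptions, not ROOST's API).
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SafetySignal:
    content_hash: str   # fingerprint of the detected media, never the media itself
    pattern: str        # non-identifying description, e.g. a grooming-pattern label
    detector: str       # which local classifier fired
    observed_at: str    # timestamp; no user identifiers are ever included

def detect_locally(media_bytes: bytes, pattern_label: str) -> SafetySignal:
    """Convert a local detection into a shareable, non-identifying signal."""
    return SafetySignal(
        content_hash=hashlib.sha256(media_bytes).hexdigest(),
        pattern=pattern_label,
        detector="local-classifier-v1",
        observed_at=datetime.now(timezone.utc).isoformat(),
    )

def share_with_federation(signal: SafetySignal) -> str:
    """Stand-in for posting the signal to a cross-platform incident exchange."""
    # In practice this would be an authenticated call to the federation's endpoint.
    return json.dumps(asdict(signal))

if __name__ == "__main__":
    signal = detect_locally(b"<flagged media bytes>", "grooming-pattern:escalation")
    print(share_with_federation(signal))
```

  The key property is that only a fingerprint and a pattern description travel between platforms; the media itself and any user identifiers stay local.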

  • Industry also seems to be closing up (for example, OpenAI) to find sustainable business models. Doesn’t that clash with open approaches in academia and civil society?

  • The tension is real but manageable and often productive.

  • Open releases of previous‑generation models enable research on steerability, mechanistic interpretability and scalable oversight. Frontier labs can then adopt those improvements. John Carmack at id Software open‑sourced prior engines not out of pure altruism, but to grow the field, which benefited everyone — including his own work.

  • On OpenAI specifically: my understanding is that they delayed releasing certain GPT‑OSS models until they could reduce offense‑dominant misuse in bio and cyber. When open releases do not materially raise those risks, commercial and open incentives can align.

  • What about the “AI race”—U.S. vs. China and others?

  • The main race is horizontal diffusion and standards, not a vertical dash to a singleton.

  • Most countries want steerable, “no‑strings‑attached” technology they can govern locally. Open, interoperable models and tooling set norms — much as the world standardized on TCP/IP. That is why I emphasize defense‑dominant open safety: shared safety tech makes diffusion safer.

  • Some labs in the Bay Area still frame a vertical race to superintelligence, but policy momentum — and needs outside a few hubs — point to diffusion, interoperability and local steerability.

  • And China’s position?

  • Expect a standards contest, but with less asymmetry than in 5G.

  • The PRC will likely pursue standards leadership — its 5G strategy is the playbook. The U.S. Clean Network initiative previously checked Huawei/ZTE in core infrastructure across allies. In AI, however, the gap is smaller: the U.S. has more compute and often stronger frontier models at present, while open, steerable models (e.g., Qwen, DeepSeek) have mattered in diffusion. So, winning the standards/diffusion race is key.

  • On transcendental debates—AGI, superintelligence, replacement—what’s your ethical lens?

  • I advocate for an ethics of civic care for multi‑agent futures with radical asymmetry.

  • Humanity already forms super‑intelligences through institutions and corporations. AI agents add bandwidth and speed — e.g., one “mind” with many bodies. This breaks simple utilitarian calculus (machines can optimize at scales and speeds that dominate the utility ledger), and rights‑based approaches assume human‑speed feedback loops that are easy to reward‑hack. Classical virtue ethics presumes human embodiment; “courage” or “temperance” mean something different for a system with 10,000 bodies.

  • Care ethics addresses radical asymmetry between caregiver and cared‑for. As Geoffrey Hinton suggested with the “maternal instinct” metaphor: a gardener moves far faster than the plants, but chooses to act at plant speed. Imbuing AI with civic care points to a symbiotic future, not a singleton that centralizes power.

  • In the third part, I would like to talk with you in more detail about your perspective on AI. How do you define AI today—and how will that change?

  • Today, AI systems infer from data to generate predictions, recommendations, content or decisions that affect an environment — a broad definition consistent with the OECD, the EU AI Act and recent U.S. policy language.

  • In the near future, we move from input‑output to experiential AI agents. Systems will not wait for prompts; they will explore, collaborate and accumulate experience. Much current progress — e.g., coordination for autonomous robots (avoiding collisions) or household skills (like laundry folding) — comes from massive open‑ended simulation, where agents live millions of subjective years to learn policies. Governance will feel less like writing static rules and more like designing habitats and curriculums.
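  As a toy illustration of “designing habitats and curriculums” rather than writing static rules, here is a sketch of a curriculum loop over a simulated environment; the environment dynamics, learning rule and mastery threshold are all hypothetical.

```python
# Toy sketch of governance-as-curriculum: shape the habitat an agent learns in,
# unlocking harder stages only once the current one is mastered.
# The episode model and the learning rule are hypothetical stand-ins.
import math
import random

def run_episode(skill: float, difficulty: float) -> bool:
    """One simulated episode; success is likelier when skill exceeds difficulty."""
    return random.random() < 1 / (1 + math.exp(-(skill - difficulty)))

def train_with_curriculum(stages, block=500, mastery=0.8):
    """Advance through progressively harder stages of the habitat."""
    skill = 0.0
    for difficulty in stages:
        while True:
            wins = sum(run_episode(skill, difficulty) for _ in range(block))
            skill += 0.002 * wins        # crude stand-in for learning from experience
            rate = wins / block
            print(f"difficulty {difficulty:.1f}: success {rate:.2f}, skill {skill:.2f}")
            if rate >= mastery:
                break                    # curriculum unlocks the next stage

train_with_curriculum(stages=[0.5, 1.5, 3.0])
```

  The governance lever in this framing is the staging and the mastery threshold (the shape of the habitat), not a rule bolted on after deployment.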

  • What changes do you expect in human communication?

  • Trust re‑roots horizontally as institutional authority weakens.

  • People will increasingly attribute experience — even qualia — to AI agents; some will argue for their consciousness and moral standing. As vertical authorities (ministers, scholars, journalists) lose automatic deference, because anyone can summon agents that speak with tones of authority, we must tend the fabric of trust horizontally: peers who share language, context and evidence.

  • I have said publicly (e.g., in an interview with Nick Thompson at The Atlantic) that I would accept claims about AI consciousness if a randomly selected, well‑informed citizens’ jury, using transparent evidence, deliberated and chose to grant civic rights. Legitimacy should be civic and social, not decided solely by elite interpretation.

  • What are the societal implications for you—decision‑making, work, meaning?

  • Less routine work; more civic meaning and participation.

  • As routine tasks automate, workweeks likely compress (five to four to three to two days), freeing time for civic engagement, community and spiritual life. People will seek meaning in co‑creation rather than zero‑sum competition. Taiwan’s 2019 curriculum reform anticipated this by emphasizing curiosity, collaboration and civic care as core competencies — for humans and for the institutions we build with AI.

  • Are democracies prepared—or is this a major challenge?

  • I am optimistic. Many democracies are at peak polarization; people in multiple parties are tired of extremes dominating the megaphone. AI can scale broad listening. Traditional polling collapses nuance into a Likert tick; deliberative, generative polling — now feasible at scale — tends to leave everyone slightly happier and no one furious. I referenced a Forbes piece on such “broad listening” methods: they depolarize by design.
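  A minimal sketch of why such polling depolarizes by design: instead of ranking options by their average (Likert‑style) score, rank them by how the least satisfied group fares. The numbers are hypothetical, and real broad‑listening pipelines also cluster free‑text responses rather than pre‑set options.

```python
# Sketch of "everyone slightly happier, no one furious" (hypothetical numbers):
# pick the option whose least satisfied group is still reasonably happy,
# rather than the option with the best average score.
satisfaction = {  # option -> each group's satisfaction, on a 0..1 scale
    "option_A": {"group_1": 0.95, "group_2": 0.95, "group_3": 0.10},
    "option_B": {"group_1": 0.70, "group_2": 0.65, "group_3": 0.60},
}

def mean_rule(scores):
    return sum(scores.values()) / len(scores)

def maximin_rule(scores):
    return min(scores.values())  # nobody is furious if this number stays high

best_by_mean = max(satisfaction, key=lambda o: mean_rule(satisfaction[o]))
best_by_maximin = max(satisfaction, key=lambda o: maximin_rule(satisfaction[o]))

print("Likert-style averaging picks:", best_by_mean)      # option_A (mean 0.67)
print("broad-listening maximin picks:", best_by_maximin)  # option_B (min 0.60)
```

  Averaging rewards an option that thrills two groups and enrages the third; the maximin rule surfaces the option everyone can live with.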

  • We have already discussed various dimensions of AI. Is there anything we haven’t covered yet?

  • Yes: beware of hallucinated polarization within the AI community — “doomers vs. accelerationists.”

  • In practice, leaders in both camps prioritize epistemic security and worry about malicious AI swarms, coordinated information harms and synthetic harassment that chills speech. The overlap is the steering wheel: peaceful conflict resolution, robust anti‑abuse infrastructure and open safety tooling.

  • Do you see the camps converging?

  • Gradually, yes. At Bletchley Park the debate was intense, so we co‑drafted an Openness and Safety statement with Mozilla and others. Already at the Paris AI Action Summit we saw a narrow corridor open between “brake” and “accelerate”; ROOST is one exhibit. A multi‑pillar effort sometimes referred to as “Current AI” is widening that corridor. By the India Impact Summit next February, I hope we’ll see further shared infrastructure and commitments.

  • Some argue for stopping AI entirely—risks can’t be handled. What do you think about that?

  • Even the strongest critics — e.g., Eliezer Yudkowsky and colleagues at MIRI — call for more work in mechanistic interpretability, corrigibility and explainability. The issue is the gap: an order of magnitude more resources go to capabilities than to safety.

  • To remedy this, I support Public AI: an ICANN‑like network producing permanent public safety goods, publicly accessible and accountable (not necessarily state‑owned). Regulators can require frontier labs to co‑invest and interoperate — similar to how telecoms co‑invested in shared 5G infrastructure. It is not “stop everything”; it is “match safety to capability growth and build the commons.”

  • With the scale of private investment, will labs actually participate?

  • There is a precedent. Netscape seeded the Mozilla public browser effort; the Mozilla Foundation stewarded open web standards that enabled Firefox, and the ecosystem prospered (Chrome, Safari, etc.).

  • OpenAI originally adopted a capped‑profit model under a nonprofit — a Mozilla‑like structure — though governance evolved with investment. The broader point stands: if defense‑dominant openness does not raise risk, policymakers can require interoperability and shared safety baselines. We already see that logic in browsers, and increasingly in social media.

  • Perfect—thank you. I’ll share the upcoming International Journal of Communication call for papers.

  • Wonderful — please send it along.

  • Thanks for the thoughtful conversation. Live long and prosper!