• Good evening everybody, and thank you very much for coming. Welcome to the Oxford Union.

  • I’m delighted to introduce our guest, Audrey Tang, Taiwan’s first Minister of Digital Affairs and a leading advocate for open government and digital democracy.

  • By age eight, she was already teaching herself to code, and by 15, she’d founded her own IT company, serving as its chief technical officer. She later became the youngest and the first openly transgender and non-binary cabinet member in Taiwan’s history.

  • Tang has championed the integration of technology into government and radical transparency, for example, declaring broadband a human right and creating open-source mask maps during the pandemic.

  • Tonight, we’ll explore a wide range of topics, from how her early life shaped her views to the future of democracy in the digital age. But first, can we give Audrey a round of applause?

  • (applause)

  • You often say that in Taiwan, personal computing is inherently democratic because it allowed citizens to publish ideas without state permission. How did growing up in a newly democratizing Taiwan, where the lifting of martial law in 1987 coincided with the spread of the internet, influence your beliefs?

  • Yeah, I was born with a heart defect, and that was quite formative. When I was around four, in ’85, the doctors diagnosed me and told my parents that this child had a 50% chance of surviving until surgery. And so, every night after that felt like, I don’t know, a coin toss or something. If it didn’t land well, I wouldn’t wake up. I learned very quickly that I don’t have time for perfection, and so I got into this habit of “publish before I perish,” which means that I recorded whatever I learned that day—first onto cassette tapes and floppy disks, and finally onto the internet.

  • I think this is inherently very democratic because if you just publish something perfect on the internet, people just press “like” and then scroll away. But because I published half-finished thoughts online, people really co-created, debated, and gave me very good feedback. In the earlier era in Taiwan, before we first had a presidential election in 1996, I participated as a young child in the earlier civic movements—for example, the consumer cooperative movement, the consumers’ rights movement, and many other movements, including spiritual and charity movements. These were kind of the only ways the Taiwanese people could assemble and associate before they had the right to form political parties or newspapers, and so on.

  • By the time we actually got to vote for our president in ’96, we already had a very strong civic muscle in the social or plural sector. I think the two main lessons I draw from this are: First, because we democratized after the popularization of web browsers and the internet, we see democracy itself as a kind of technology that can upgrade every other month or so by introducing new ways for higher bandwidth and lower latency. Many of them were prototyped in the co-op movement in the social sector. So that’s the first lesson: people pioneered new forms of democracy in the social sector before they were adopted by the public sector.

  • And the second thing was that because we all remember the martial law—anybody over 40 years old does remember that—whenever there’s any policy emergency or whatever that in other countries would result in censorship or takedowns, or really anything that encroaches on the freedom of speech, expression, and assembly, Taiwanese people collectively say that this is not part of the solution space. This is why we needed to overcome the infodemic—fraud, scams, whatever—with no takedowns and no censorship at all. This created a very interesting, challenging policy environment because fully half of the policy space was unavailable, but that also forced us to co-create so that more speech, more context, and so on could actually depolarize the society without resorting to censorship.

  • You learned English only after German, French, and Swedish, and you joke that your first language is JavaScript.

  • Well, not quite. It’s probably Logo; it’s a form of Lisp. But anyway, yes.

  • When you look back on your childhood, being bullied both for your intelligence and for transitioning, how did those experiences teach you the value of sharing knowledge freely?

  • Yes, so as I mentioned, I published before I perished because I was never sure that I would wake up the next day. And, as Leonard Cohen put it, “There’s a crack in everything, and that’s how the light gets in.” I basically learned that vulnerability is an invitation for the light to come in. When I was eight, I believe, my mom wrote this weekly newspaper column about essentially my life story, so the entire bullying and things like that were a matter of national discussion. I learned very quickly that as soon as we generalize it, so that it’s not just about my vulnerability but also about everybody else suffering the same thing, it turns the personal into the structural. And then, instead of everybody “reading the air”—it’s a Japanese expression, like guessing the social context, what is appropriate to do—one can actually create common knowledge and “write the air,” so to speak.

  • And so the point is to turn vulnerability and suffering around by empowering the people closest to the pain, so that the ideas they come up with become the new national norm. For example, in Taiwan, up to 10% of young children do not have to follow the curriculum; they can choose their own curriculum. They’re called the “experimental education” people. Instead of just homeschooling, they actually write up what they learned from this outside-of-curriculum learning, and every 10 years or so, we merge their learning back into our national curriculum. It’s like a research-development relationship between people who want to innovate and the people who turn that into production.

  • I think this interesting relationship of empowering people on the fringe, the vulnerable, and so on, also informed my politics, which is about letting people younger than 18—the next generation—lead the national direction by inviting them as reverse mentors, as cabinet-level advisors. It’s essentially the Pygmalion effect: we expect them to lead the country, and they grow up very quickly and do lead the country.

  • You famously aimed to “fork the government” by building parallel websites that presented official information more clearly.

  • That kind of activist hacking eventually led you to serving in government. Could you explain what “forking the government” means in practice and how citizen-led tech platforms change the way Taiwan’s government operates?

  • Yeah, so “fork” is a software term. It means that you take something that’s there, and instead of writing it off or destroying it, you keep all that is there, but then you steer it toward a different direction. In 2012, a bunch of my friends registered the domain name g0v.tw, which is an interesting play because all the official government websites are at G-O-V.tw. But by changing an “O” to a zero, you get into the “shadow government,” which works on the same kind of data but is always more interactive, faster, more fair, and more fun.

  • It created this notion that, instead of protesting against something, you demonstrate for something. So instead of saying, “This doesn’t work,” people are invited to make something that works better—which may or may not always work, but some of those attempts did work.

  • For example, when people criticized the government for not rationing out masks well during the pandemic, they co-created their own mask availability map system, which was merged swiftly in just 24 hours into the national system. The same goes for contact tracing. People were saying that the old contact tracing system was not privacy-preserving enough, and then the g0v people forked a new contact tracing system which was zero-knowledge. It got merged into the national system in just three days, which served us well into Omicron.

  • I can go on. I think a thousand or so projects from g0v during the past ten years really changed the norm, so that people always ask how you can fork the government so it can merge you back in. This creative, parallel sense of experimentation makes democracy and policy something like a semiconductor, where you can try different layouts and so on.

  • How would you address concerns that too much openness or anonymity (on open forums, for example) could undermine expert-driven decision-making or even national security?

  • Well, I mean, we’ve been doing “meronymity,” or partial real-name, partial anonymity, for quite a while now. On the national platform of participation, join.gov.tw, anybody can collect 5,000 signatures and force a national-level policy discussion, deliberation, participatory budgeting, and so on. But on those forums, if somebody claims they’re a resident of, say, Taipei City, they only reveal that they’re a resident of Taipei City, but nothing else. This idea of meronymity is very powerful because it enables us to authentically know that we’re not interacting with robots.

  • Or, if they claim they’re over 18 or under 18, they can produce a zero-knowledge proof that proves this fact as an attestation or credential, but they do not need to reveal their ID. There’s no way to dox that person, which leads to much better-quality conversations. It also made it possible for us to host online citizens’ assemblies so that we know those 10 people in the room are diverse, but they also have something connecting them, without again doxxing themselves. It enables the best of both worlds: We enable whistleblowing, epistemic injustice can be corrected this way, but you don’t get astroturfing or bots.
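
  • A minimal sketch, for illustration, of the meronymity pattern described above: an issuer attests to a single attribute, such as city of residence, and the forum verifies that attestation without ever learning who the person is. A real deployment would use asymmetric signatures or genuine zero-knowledge proofs; the shared HMAC key below is only there to keep the example self-contained.

```python
# Illustrative sketch only: an issuer attests to a single attribute (e.g.
# "resident of Taipei City") and the forum verifies the attestation without
# ever seeing a name or ID number. A real deployment would use asymmetric
# signatures or zero-knowledge proofs; the shared HMAC key here just keeps
# the example self-contained.
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the attesting authority

def issue_attestation(attribute: str, value: str) -> dict:
    """Sign exactly one attribute about a person, and nothing else."""
    payload = json.dumps({"attribute": attribute, "value": value}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}  # contains no identity at all

def forum_accepts(attestation: dict, required: tuple) -> bool:
    """Check the attestation is genuine and matches the forum's requirement,
    learning only the one disclosed attribute."""
    expected = hmac.new(ISSUER_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return False
    claim = json.loads(attestation["payload"])
    return (claim["attribute"], claim["value"]) == required

credential = issue_attestation("resident_of", "Taipei City")
print(forum_accepts(credential, ("resident_of", "Taipei City")))  # True
```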

  • Let’s move on to a concrete example of your approach in practice. During the COVID-19 pandemic, Taiwan’s response was extremely successful. You crowdsourced mask availability maps and used a “fast, fair, fun” approach. Your team even used humor—a cartoon of the Premier—to defuse a rumor about toilet paper.

  • As a result, public trust in the government soared from single digits to over 90%. Could you describe how the “humor over rumor” approach was born and how the collective citizen input worked?

  • Yeah, in 2020, the presidential approval rating was 71%, and the Central Epidemic Command Center (CECC) was over 90%. The way the CECC earned so much trust is because we gave people a lot of trust. Anybody could call the toll-free number 1922 and say whatever they want about the counter-pandemic efforts, and that allowed us to overcome all the conspiracy theories, polarization, and so on.

  • I’ll use a few examples. There was a 10-year-old boy who called 1922 saying, “You’re rationing masks, which is great, but all I got were pink masks, which is not so great because I’m a boy. I don’t want to wear pink; people will laugh at me.” Instead of the Ministry of Education just saying that’s bullying (which never works), at the next 2 PM daily CECC press conference, all the officers wore pink, regardless of gender. We also worked with online brands, and they all turned pink. Suddenly, the boy was the only one with the limited-edition mask that all the heroes wear. That really depolarized society.

  • Another example: Early in 2020, as we were rationing masks, there were two very dangerous strains of memes making the rounds. One said that because of our SARS experience, only N95 masks are useful; everything else is just placebo. Another strain said any kind of mask actually hurts you, and N95 hurts the most. If these two continued to polarize our society, our counter-epidemic effort would not work.

  • So instead, we worked with the “participation officers” (POs) embedded in each ministry and agency. They did a very quick, crowdsourced survey and we found the “uncommon ground”—the one thing that both sides could agree on. The very next day at 2 p.m., we released this meme where a Shiba Inu, a Doge-like dog, a very cute spokesdog for the CECC, put her paw to her mouth saying, “Wear a mask to remind each other to keep your dirty hands from your own face.”

  • So, if I want to wear a mask and you don’t, I’m just reminding you to wash your hands. Nobody could really push back against that idea, because the science wasn’t very clear which side was winning. We found this one thing that could bridge and translate across the two. Basically, if you look at a cute Shiba Inu and laugh about it, the depolarization works, and people become inoculated against the hate or outrage. And we did measure tap water usage; it increased massively.

  • Obviously, the COVID-19 pandemic was extremely serious, had worldwide impacts, and killed millions. How do you defend using humor in that relatively serious context?

  • Yeah, well, because it turns out that humor travels faster than outrage online. During the pandemic, especially on social media, people live in a very high “PPM” (polarization per minute) environment, and the fog makes it almost impossible to agree on something. But humor just cuts right through it. People, once they laugh about it, enjoy a moment of clarity, of relatively low PPM.

  • But we never do humor for humor’s sake. All the humorous communications are linked with a bridging conversation. We know that people on both sides, however polarized, do agree with this bridging statement. The humor was more like a payload to make sure it travels faster than rumors.

  • You co-authored the book Plurality with economist Glen Weyl, describing a vision where democracy and technology reinforce each other. The book argues that today’s two big tech trends, AI and blockchain, are actually threats in different ways: AI by enabling authoritarianism and crypto by empowering extreme libertarianism. Peter Thiel even said crypto is libertarian and AI is communist. Do you agree with that frame?

  • Well, I mean if you mean communism as absolute centralized state control and redistribution, then yes, I think the current development of large transformer models, which cost a lot of electricity to train, is naturally gravitating toward this over-centralization of power.

  • I would also say that this is not the only way to train AI systems. In Taiwan, our national development fund, instead of funding one national champion that trains a huge foundational model, supports more than 150 different smaller models, each built for a particular industry or culture. We have more than 20 national languages, including sign language and those of the indigenous nations. All of them can tune and train smaller models toward their particular way of working, like a code of conduct, inviting the AI systems to be assistive.

  • It’s putting AI in the loop of humans, instead of putting humans in the loop of AI, which feels like a hamster in a hamster wheel. It just keeps going faster and faster, but the hamster has no control, no steering over the hamster wheel. It is possible to develop AI systems that are symbiotic with local communities and that do not cost so much energy to train. But it does require actually knowing where you’re steering the AI models, which again points to democracy as a technology—broad listening and so on—so that society can very quickly converge and say, “We want AI systems in these roles, and please train AI systems for these roles only.”

  • There’s obviously big tech, which controls a lot of the growing AI field. How much of a threat is that to technology’s potential to be a force for good?

  • Well, first of all, not all AI companies are proprietary. There are AI companies built upon the idea of open access; for example, Mistral is famously doing that in France. Even OpenAI, which has “open” in its name, started doing open models this year. Just a week ago, we worked as part of ROOST (Robust Open Online Safety Tools) with OpenAI to release the Safeguard model. This is a model you can run on a high-end phone that looks at any policy—like, any community can bring their own policy about what content counts as culturally appropriate.

  • So maybe one community can say, “We care about climate justice,” and only uplift that sort of conversation. Another community can say, “We care about creation care,” and only have full biblical quotes, and so on. It’s not opinionated; you can just have your policy as a text file. The AI running on your own community’s hardware is more like a local steward than an all-seeing, all-knowing singleton.

  • It also enables translation, so that when those communities are actually talking about something, you get this kind of cross-community communication without relying on a single large model. I think we’re now gradually seeing the largest AI companies recognize that most communities will prefer their own locally tunable steering model. The latest AI action plan from the White House is also saying that if the U.S. does not do this kind of standardization play for open models, then Beijing will probably seize that moment. So I think we now have a race to the top, instead of to the bottom, when it comes to centralization risks.
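
  • The “policy as a text file” idea described above can be pictured with a small hypothetical sketch: a community-written policy file is prepended to each moderation request before a small model on the community’s own hardware scores it. The file name, prompt wording, and the run_local_model stand-in are assumptions for illustration, not the actual ROOST or Safeguard interface.

```python
# Hypothetical sketch of "bring your own policy": the community's policy is
# just a text file, and a locally hosted safety model is asked to apply it.
# run_local_model() stands in for whatever on-device inference runtime the
# community uses; it is not a real ROOST or OpenAI API.
from pathlib import Path

POLICY_FILE = Path("community_policy.txt")
DEFAULT_POLICY = "Uplift climate-justice conversations; keep debate civil."
POLICY = POLICY_FILE.read_text() if POLICY_FILE.exists() else DEFAULT_POLICY

def moderation_prompt(post: str) -> str:
    """Build the request a small local model would score against the policy."""
    return (
        "You are a content steward for one community.\n"
        f"Community policy:\n{POLICY}\n\n"
        f"Post:\n{post}\n\n"
        "Reply with 'allow', 'uplift', or 'hold for review', plus a one-line reason."
    )

# decision = run_local_model(moderation_prompt("Anyone joining the river clean-up?"))
```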

  • Critics might say goals, like having a thousand deeply engaged advocates reaching a billion people by 2030, are unrealistic or even delusional. How do you respond to that? How feasible is it to reach one billion people with these open forum platforms?

  • Well, I mean, these are just ways for people to make better meeting summaries. These are not some magical tech platform that takes eons and a state to set up. They’re literally just polling mechanisms, right? One insight is that people are more sociable when they’re around other people. If you poll people individually, they tend to be on the extremes—like “N95-only mask” or “ventilation, no mask ever.” Or like when we polled people in California directly after the wildfire about mitigation and rebuild: some people are YIMBY (Yes In My Backyard), some people are NIMBY (Not In My Backyard). But if you poll people in groups of ten, everybody becomes MIMBY (Maybe In My Backyard): “Okay, maybe in my backyard, if you do this, if you do that,” and so on.

  • This kind of forum software is basically saying that it’s much easier to get generative policy ideas if you poll communities instead of individuals—if you aggregate the preferences of people when they’re around other people. And it’s just that. We’re now working with pollsters, such as the Napolitan Institute and Rasmussen in the U.S., to do this kind of group polling in all congressional districts. My hope is that in a few years, instead of saying it’s “deliberative polling” or “generative polling,” people will just say, “Oh, let’s have a poll.” It’s like how you don’t say “I’ll e-mail people” anymore; you just mail someone.

  • So I think the switch from the individual to the community is entirely feasible. I don’t think this is delusional at all. People are generally waking up to the fact that the only way to get past this polarization—this illusion of polarization—is to put people in the context of communities.
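
  • A toy numerical illustration of the individual-versus-group contrast described above (not the actual polling methodology): opinions are drawn from a deliberately polarized population, and polling each person alone is compared with summarizing groups of ten by their average position.

```python
# Toy illustration only: opinions come from a polarized (bimodal) population.
# Polled one by one, most answers sit near the two poles; summarized in
# groups of ten, the group positions cluster toward the middle.
import random
import statistics

random.seed(0)

def opinion() -> float:
    """An opinion on a -1..1 scale, drawn from one of two opposing camps."""
    centre = -0.8 if random.random() < 0.5 else 0.8
    return random.gauss(centre, 0.2)

individuals = [opinion() for _ in range(10_000)]
group_positions = [statistics.fmean(opinion() for _ in range(10))
                   for _ in range(1_000)]

def share_extreme(values) -> float:
    """Fraction of positions further than 0.5 from the midpoint."""
    values = list(values)
    return sum(abs(v) > 0.5 for v in values) / len(values)

print(f"extreme answers, polled individually: {share_extreme(individuals):.0%}")
print(f"extreme positions, groups of ten:     {share_extreme(group_positions):.0%}")
```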

  • Recently you’ve traveled widely, helping California’s Governor Newsom build the “Engaged California” wildfire-response platform and inspiring a new ‘Team Mirai’ party in Japan based on your principles. How transferable is Taiwan’s model to other countries? Are there, for example, any cultural differences that might make the transition difficult?

  • Well, I think the only thing one needs for this kind of broad listening is to have a committed listening partner that agrees to respond point-by-point to the results summarized by a language model, in any particular polity. In Bowling Green, Kentucky, they got local developers, museum curators, and so on to say that if the entire community—despite the apparent polarization in the U.S.—manages to agree on the future of Bowling Green in 2050, then they commit to a point-by-point response and will act based on the “uncommon ground” built by this technology.

  • I don’t think this is hard to get at a hyperlocal scale. Of course, it might be harder if you’re talking about hundreds of millions of people, but in a town of 10,000 people, it’s trivial.

  • In fact, in Finland, before they had the national conversation using this platform, many of their city councils had this online agenda-setting. They commit, every week when they have their city council meeting, to reading the consensus. That’s it. As literally an agenda-setting-only power, that is very achievable.

  • Do you find that, if you’re comparing on the same scale, America or Europe are more polarized than, say, Taiwan?

  • Well, I don’t think people are actually polarized. I think we have a high “polarization per minute” (PPM) context, thanks to social media and the 24/7 media cycle. To me, this is more about whether people feel they can still go to some civic space where there’s a low PPM. It’s a function of the space, not of the people. All over the world, as soon as people can find some sort of pro-social media to engage in and something common that binds them together—it could be sport, spirituality, or a civic movement—suddenly people feel they’re not polarized anymore.

  • One big part is the way people in Taiwan use social media. For example, we’re probably one of the only jurisdictions where there’s not a lot of growth for TikTok or other recommendation engines that amplify outrage and division. In fact, I think we’re the largest segment on the Fediverse through the Threads platform. This means you can post on threads.net but you don’t have to reply on threads.net; you can consume that content anywhere. Basically, you choose your own garden instead of a walled garden. The Taiwanese people are the largest segment by daily active users on Threads.net compared to any other country, even though we’re just 23 million people.

  • Because of that, it’s just like number portability. If you switch from one telecom to another and cannot keep your number, there’s a lot of misaligned incentive for the existing large player to squeeze you and not offer good service. But as soon as you have portability, you can take your community and your business with you when you switch to a different social network. Then, all the incentives are on the social network to enable this kind of bridging and low-PPM conversation, because if they don’t serve you well, you just take your community elsewhere.

  • Taiwan sits next to a huge authoritarian neighbor. How do you navigate technology diplomacy when China and Russia are also trying to advance their own tech visions?

  • Well, around ten years ago, both sides of the Taiwan Strait looked at the same phenomena: recommendation engines, parasitic AI that drives engagement through enragement. But we came to very different conclusions. Taiwan, as I mentioned, focused on making the state transparent to the people so we can build common knowledge from this “uncommon ground” to build digital democracy and broad listening. The Beijing regime instead spent a lot of money on what’s called “harmonization” (wéiwěn, or “stability maintenance”) and gradually decimated journalistic freedom, freedom of expression, and association online. They decimated these civic muscles so that they became very good, I guess, at making the citizen transparent to the state, instead of the state transparent to the citizen.

  • We were on two tracks that were directly opposite. There’s no winner, I guess; you win by default once you turn in that direction. The main challenge Taiwan faced during the past decade—in addition to the polarization attacks from Beijing (for the past 12 years, Taiwan was top of the world on the receiving end of those attacks)—was countering the narrative that democracy only leads to chaos and never delivers.

  • Every time we face something new—the pandemic, the infodemic, deepfakes, fraud, whatever—we need to figure out a way that does not backslide on our internet freedom and democracy, but still overcomes that challenge. Every time we do that, people share a peak experience and become even more resilient against the Beijing narratives.

  • You spoke about introducing the counter-narrative of democracy delivering. What is Taiwan’s stance on content moderation versus free speech more generally, particularly in the face of foreign interference?

  • Yeah, we believe in actor- and behavior-level liabilities instead of content-level takedowns. We don’t do administrative takedowns; there’s just no censorship. We’re top of Asia, I believe, and maybe one of the top in the world when it comes to internet freedom. But our constitution never protected the right of foreign robots, you know, to cross-pollinate and reach virality in Taiwan. So there’s a very strong sense, as I mentioned, of meronymity, that you have to at least prove you’re not a robot.

  • In fact, last year when we faced a new wave of deepfake scams in March, we sent 200,000 text messages to random numbers around Taiwan asking one simple question: “How do you feel about this new challenge to information integrity? What should we do about it?” People gave us their ideas, thousands volunteered for an online citizens’ assembly, and we chose 447 people statistically representative of the Taiwanese population (gender, occupation, place they live, etc.).

  • In groups of ten, they deliberated. For example, one room said, “Let’s make sure all ads on social media are displayed as ‘probably scam’ unless digitally signed by someone through a KYC (Know Your Customer) process we can trust.” Another room said, “If somebody loses seven million dollars to an investment scam that’s not digitally signed, let’s hold Facebook or other social media liable for the full seven million-dollar damage.” Another room said, “If TikTok does not set up a local office and ignores our liability rules, let’s slow down the connection to their videos so their business goes elsewhere, without fully censoring them.”

  • After five hours of deliberation, a language model wove together the 45 resonating ideas from those 45 rooms into a core package. We voted on it and showed the parliamentarians that, no matter their ideological position, this core package left more than 85% of people happy and nobody very unhappy. That was passed in record time. That was in March; by July, the legislation had all passed. This year, if you scroll, you just don’t see deepfake ads anymore. But we did not do any content-level censorship. It’s entirely, as I mentioned, actor-level and behavior-level policy.
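
  • The acceptance test described above, more than 85% of people happy and nobody very unhappy regardless of ideological position, can be sketched as a simple check over ratings grouped by self-declared leaning. The group labels, ratings, and thresholds below are invented for illustration and are not the actual evaluation used in Taiwan’s process.

```python
# Illustrative sketch of a "bridging" acceptance test: a package passes only
# if support is broad overall, strong opposition is essentially absent, and
# every ideological group mostly supports it. All data here is made up.
from collections import defaultdict

votes = [  # (self-declared leaning, rating from -2 "very unhappy" to +2 "very happy")
    ("green", 2), ("green", 1), ("green", 1),
    ("blue", 1), ("blue", 2), ("blue", 1),
    ("independent", 1), ("independent", 0), ("independent", 2),
]

def passes_as_bridging(votes, min_support=0.85, max_very_unhappy=0.0):
    by_group = defaultdict(list)
    for group, rating in votes:
        by_group[group].append(rating)
    ratings = [r for _, r in votes]
    happy = sum(r > 0 for r in ratings) / len(ratings)
    very_unhappy = sum(r == -2 for r in ratings) / len(ratings)
    # Require majority support inside every single group, not just on average.
    group_ok = all(sum(r > 0 for r in rs) / len(rs) > 0.5 for rs in by_group.values())
    return happy >= min_support and very_unhappy <= max_very_unhappy and group_ok

print(passes_as_bridging(votes))  # True for this toy ballot
```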

  • To change the topic slightly, you’ve suggested Mandarin is less gender-specific, helping with your transition. Yet many Western countries are caught in a so-called “cultural war” over identity, which you criticize as “having a lack of chill.” Do you advise global leaders to be more relaxed about social issues?

  • Well, my pronouns are “whatever,” so this also works in English. I think it’s interesting that many people see online conversations as so polarized because they get caught up on the specific ways words are used. I used the example earlier of “climate justice” versus “creation care.” If you actually look at these two communities, their concrete policy proposals are probably the same. But just because they’re using different words, it becomes very difficult to actually talk with one another.

  • This is why I believe this kind of bridge-making algorithm and bridge-making dictionary is so important. We’re working with what’s called the Green Earth project, which prototypes on Bluesky, but also BitChute, I believe, and hopefully Truth Social, so that each community can have its own conversations, but the tech also does “community notes” to pass notes from one social network to another with appropriate bridging translations.

  • Each community can talk in whatever words they find comfortable, but they can also reach “uncommon ground” with other communities using different epistemic standards. I think the pronoun “whatever” is one example of this generalized principle: if you work with the community—tech for a community—instead of making some tech progress that wants to sacrifice another community’s epistemic norms, you actually get further than just one single, progressive idea of technology.

  • You’re widely recognized as a trans icon and the world’s first non-binary minister. How do you handle the media attention on your gender identity?

  • Well, like anything, right? I “hug the trolls.” There are a lot of trolls on the internet, and many of them actually have legitimate grievances. Many of them just make personal attacks, 1,000-word rants, or whatever. I’ve published all my meetings, journalistic visits, and so on for the past ten years. There are more than 2,000 of those transcripts with 8,000 or more people; they are all in the public domain.

  • All the language models, small and large, are pre-trained on my thoughts, basically, and on my A-game. Whenever I receive a trollish attack, my language model helps find the five words within that 1,000-word rant that can be construed as constructive, and then I engage them meaningfully based on these five words.

  • If they have a lot of grievances and gripes, chances are they’re actually suffering somehow from existing policy and just didn’t have the right words for it, or they’re systemically ignored by the political apparatus. So they just use gender or whatever as a way to get my attention. But I make sure that I only respond to the constructive part. For the other parts that are not constructive, I just receive them like poetry. Like, “Oh, language can be used this way. It’s very interesting.”

  • Of course, this is emotional labor if you have to do it by yourself. But because of this exoskeletal language model that I wield, I manage to hug trolls on a kind of industrial scale.

  • Does it distract you? And more importantly, do you think it’s a distraction from your mission at all?

  • Not at all. Part of the point of working at a Lagrange point, between governments on one side and movements on the other, is not to be pulled into one particular orbit. So I don’t think this is distracting; I think this is actually attracting.

  • That is to say, it attracts the kind of people who otherwise would not engage in policy conversation because they don’t have the vocabulary for it. Instead, they just post some rants and automatically get an invitation into this online conversation. It turns out that this language model speaks their local language and is genuinely curious about what actual grievances and lived experiences they can contribute. So I think it attracts the kind of people, especially very young people, who otherwise would not have the policy vocabulary, into co-creation.

  • You once described yourself as a “conservative anarchist,” valuing voluntary cooperation while respecting institutions. Can you explain that apparent paradox?

  • It’s not really a paradox. The anarchism, to me, just means I don’t believe in coercion. I don’t believe in top-down, takedown, shutdown, lockdown-style policy levers. I believe only in voluntary cooperation. But what the voluntary cooperation is doing is not sacrificing the existing community’s norms in the service of so-called progress. Rather, it is building the kind of tools that let the existing community understand that “we the people” are already the superintelligence we’ve been waiting for. We just use assistive intelligence to put AI in the loop of humans so that the connective tissue—the relational health within a community—works better.

  • This is fundamentally conservative, right? It’s about preserving the tradition, the lineage, what matters, what is meaning-making to a certain community. Then we add to that the ability to translate across epistemic norms, across communities, so the wider society becomes more plural. It’s conservative, but it’s not stopping new norms from emerging. It’s more like a symbiogenetic way of thinking about community, so that people join into larger conversations without sacrificing their autonomy in the local community.

  • One can be culturally and traditionally conservative without top-down coercion. This is something Elinor Ostrom worked out long ago, and this is just an implementation of some of her ideas.

  • After years in government, you’ve stepped down to focus on spreading your ideas worldwide. What’s next?

  • Well, literally what’s next is a seminar in the Schwarzman Centre for the Humanities a couple of days from now, on training machines in the ethics of care. I believe this is very promising because if you just train machines to maximize some number in the utilitarian sense and try to enslave and control them using some deontic rule, it really doesn’t work. Training machines to care about the relational health of communities has a real potential to shape how frontier labs think about their agentic AI systems.

  • So that’s literally what’s next. I’m really thankful to the Institute for Ethics in AI here in Oxford for giving me this senior fellowship. I’m actually an “Accelerator Fellow,” which I would nuance by saying I’m accelerating steerability instead of accelerating blind maximization.

  • After that, I’m going to Barcelona for Mozfest and then to Berlin for the Freedom Forum.

  • More generally, what do you see yourself doing in a few years?

  • Well, I think at this point, I’m just a pattern of thinking, right? It’s not Audrey Tang, this organic embodiment, but rather “Audrey Tang,” this way of prompting in the AI model and getting this exoskeletal way of collaborating across differences.

  • What I’m continuing to do is essentially what I did since I was four years old: publish my half-baked thoughts online into the public domain so that human and AI systems can let the light in through the cracks. Instead of claiming I’m a perfect ancestor solving some worldwide problem, I aim only to be a good enough ancestor, to leave the next generation more material, more canvas to work on compared to the day I logged into this world.

  • We’ve spoken about the success of the Taiwan model. What single principle would you like other world leaders to learn from?

  • Sure. I think if there’s only one thing people can learn, it is that democracy can be fast, fair, and fun. You don’t need to make a trade-off. There are a lot of ways now, thanks to digital technology, that you do not have to just make democracy exciting for a narrow slice of people. Nor do you need to go broad but leave each person only five words to work with. It is now eminently possible to poll people in groups and get the common ground—the “uncommon ground”—across different communities in a way that does not lose the nuance.

  • Let me close by asking: Given everything you’ve seen and done, both in Taiwan and across the world more generally, are you more hopeful or more alarmed by the future of democracy?

  • Well, I think democracy as a technology is really having a moment now, because people are collectively tired of polarization. No matter which side of the political aisle I talk to, when I visit the U.S.—the most advanced state when it comes to political polarization—both sides are collectively tired of it: of the 10% on the extreme left and 10% on the extreme right dominating the conversation and everybody else being caught in the crossfire.

  • I think now people really do see that if we upgrade democracy for a higher bandwidth and lower latency, if we can make democracy work in the here and now instead of as this ritual we practice every four years, then we have a good chance of depolarizing society and making democracy prosperous and fun again.

  • I’m feeling a sense of optimism because after last year—which I think had the highest number of people going to the polls ever—all the ruling parties lost seats. I don’t think there’s a ruling party that gained any seats. And they lost the seats not to the traditional opposition party, but rather to the more polarized, more extreme part of political ideology.

  • What Taiwan learned ten years ago, when we were literally at the frontline of political polarization and anti-social media, suddenly everyone is waking up to. We cannot really live in this high-PPM moment any longer. Conversely, there’s a lot of renewed interest and investment in democracy as a technology.

  • I think we’ve got time for maybe two questions from the audience. If we start here?

  • Even with so much progress, I’m an AI student and I want to ask: what is the “national stack” a country should have when it comes to leveraging AI without depending on foreign countries or foreign policies? How can we, as a local country, work on that?

  • Sure. So just to clarify, you mean a country that wants to build the local AI stack without depending on any foreign power, and you’re asking what the initial things to work on are, essentially?

  • Okay, that’s a great question. In Taiwan, we focused mostly on, as I mentioned, the linguistic work across the public sector: summarization, topicalization, translation across our national languages, and so on. The thing is, if you know what you’re doing, you do not actually need huge models for any of these particular tasks. You don’t need a super language model that has memorized the Studio Ghibli repertoire just to translate between two national languages.

  • Really, the first thing is to build such language models in the commons and make them widely available. Also, shape your policy environment such that if any part of the technical stack goes into public procurement, the same vendor cannot win the contract for the adjacent part of the stack. This is what we do in Taiwan.

  • The Ministry of Digital Affairs is uniquely in the Parliament’s transportation committee, instead of in science, technology, economy, or interior (for cybersecurity) as in other countries. This is because, in the transportation committee, people understand the idea that if you build an information superhighway, you must have some off-ramps. The entire capex investment cannot just go to one vendor, because if that happens and you suffer from a CrowdStrike incident or something, everything is lost.

  • By giving public contracts for adjacent parts of the stack to different vendors, it forces the system to speak interoperable protocols. Then you can have civic AI, public AI, and so on, working with some foreign suppliers, but they can never lock you in. This is something I think the India Stack has been doing quite well on payment and identity.

  • I would also say that most people in Taiwan would not trust the government to run all this infrastructure. What we usually do is work with people like the g0v movement, so they also become a civic infrastructure. If you don’t trust the government’s wallet system for verifiable credentials, you can run the exact same thing as public code from your school or your church, and they still stay interoperable. The idea of digital civic infrastructure is also very important.
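
  • The procurement rule mentioned above, that no single vendor may win two adjacent layers of the stack, amounts to a simple constraint check. The layer names and vendor awards below are invented purely for illustration.

```python
# Illustrative check of the "no vendor on two adjacent layers" procurement
# rule described above. Layers are ordered from infrastructure up to the
# citizen-facing application; names and awards are invented for the example.
STACK_LAYERS = ["connectivity", "cloud hosting", "identity wallet",
                "language model", "citizen portal"]

def adjacent_layer_conflicts(awards: dict) -> list:
    """Return pairs of adjacent layers awarded to the same vendor."""
    return [
        (lower, upper)
        for lower, upper in zip(STACK_LAYERS, STACK_LAYERS[1:])
        if awards.get(lower) is not None and awards.get(lower) == awards.get(upper)
    ]

proposed = {
    "connectivity": "TelcoA",
    "cloud hosting": "CloudB",
    "identity wallet": "CivicTechC",
    "language model": "CivicTechC",   # same vendor on two adjacent layers
    "citizen portal": "CloudB",
}
print(adjacent_layer_conflicts(proposed))  # [('identity wallet', 'language model')]
```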

  • And for one final question.

  • Some panelists have asserted that Taiwan has one of the highest CCTV camera-per-person ratios. I just want to hear your opinion on how you would characterize the surveillance ecosystem in Taiwan and how you ensure the government has that surveillance capability without compromising democratic values.

  • Yeah, that’s a great question. I think in Taiwan, there’s a lot of sousveillance going on. That is to say, people take live footage of the government and public sector doing things wrong in real time, too. It’s not just, as in some other countries, the state apparatus making the citizens transparent to the state; rather, at any given time, you have a lot of civic journalists.

  • In fact, that’s part of our basic education. If you’re a primary schooler in Taiwan, you learn about curiosity, collaboration, and civic care by participating in journalism: using sensors to measure air quality and water quality, and holding presidential candidates to account as they debate in real time.

  • Part of the reason is that it is not individually fact-checked claims from journalism or from the state that inoculate young minds against polarization. Rather, it is the act of going through civic journalism, reporting, and sousveillance as a class with their peers that inoculates the young against polarization and outrage. There’s a lot of this multi-dimensional accountability going on in Taiwan, which may look very strange from a very individualistic Western perspective, but it also means that radical transparency is much easier to practice as a norm in the political sector.

  • And of course, we have, as I mentioned, meronymity and privacy-preserving architecture, so that when data is collected and processed on the edge or on-chip, you do not actually dox or re-identify people when you publish it into the commons to do sense-making. You just publish the differentially private aggregation instead of the raw data. We have the national infrastructure for privacy-preserving data altruism projects based on that principle.
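
  • A minimal sketch of the “publish the aggregate, not the raw data” principle just described: a counting query is released with Laplace noise calibrated to the query’s sensitivity and a chosen privacy budget (epsilon). This is a generic differential-privacy example, not the actual national infrastructure.

```python
# Generic sketch of an epsilon-differentially-private release: publish a
# noisy count instead of raw records. The sensitivity of a counting query
# is 1, so Laplace noise with scale 1/epsilon suffices. Records are invented.
import random

def dp_count(records, predicate, epsilon=1.0):
    """Count matching records, add Laplace(1/epsilon) noise, release the result."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # A Laplace sample is the difference of two exponential samples with mean `scale`.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

records = ([{"district": "Da'an", "reported_poor_air": True}] * 120 +
           [{"district": "Da'an", "reported_poor_air": False}] * 30)
released = dp_count(records, lambda r: r["reported_poor_air"], epsilon=0.5)
print(round(released, 1))  # close to 120; the noise protects any single record
```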

  • Please, could you all join me in thanking Audrey for sharing some of their half-baked ideas? Thank you.

  • (applause)