• Dear colleagues, dear students, dear guests, dear Audrey Tang. As the Dean of the Faculty of Arts and Social Sciences of the University of Zurich, which hosts the Right Livelihood Center, I am happy to welcome you to this very special evening at our university. We are privileged to have with us today Audrey Tang, the winner of the 2025 Right Livelihood Award, also referred to as the Alternative Nobel Prize, for achievements relating to practical solutions to global problems. We are honored and excited that she has agreed to be the speaker of our Right Livelihood Lecture today.

  • Audrey Tang’s vita is overwhelming, and we will hear about it later. Let me say only one thing in this respect: she developed her extraordinary innovative capacity largely outside the formal system of education, starting her own firm at the age of 19 and becoming a minister at the age of 35—on top of that, for a totally new portfolio whose relevance she had previously demonstrated to the existing government. Her success in combining scientific excellence with the art of politics, a deep civic motivation for people’s empowerment, and a strengthening of democracy is exemplary of what we would also love our students to take away from their studies at our university.

  • Of course, we do not expect every student now to graduate at the age of 14 like Audrey Tang, starting her own firm at the age of 19 and becoming a founding minister at the age of 41.

  • Now Audrey Tang works at the Faculty of Philosophy at Oxford—how fitting, I am also representing a Faculty of Philosophy—in the field of Ethics in AI. And at our university, and also within our faculty, many of us with different perspectives—for instance from history, from computing, from arts, from political science, from languages or communication and media sciences, and many more, and also ethics of course—are highly interested in what we are going to hear from you today. And I guess you see that in how full the room is, and we’ve just discussed that many others would have liked to come and unfortunately couldn’t fit in anymore.

  • I think here at the university we also have a number of people, researchers and students alike, who are striving to find new solutions, making technology work in favor of rather than against democracy, and more broadly in favor of rather than against humankind. And we hope that some of this engagement and some of this aspiration to improve the world—be it only by little steps—will not just remain at the level of researchers and professors but also trickle down through their studies to our students. For them, as well as for our research community, Audrey Tang can be a great inspiration. Seeing how she succeeded raises hopes that such things can also work elsewhere and in the future. Audrey Tang, we thank you very much for honoring us today with your lecture. And thank you also to the Right Livelihood Center at our university for making this possible, and of course for the ongoing activities such as the lecture series Sustainability Now that help us not to overlook the broader social perspectives, if not to say the broader purpose, of all our work and studies.

  • Thank you very much, Katharina, for those kind words. We will now watch a short film about Audrey Tang and the five laureates, and then hear Audrey Tang’s lecture. After the lecture there will be a conversation with Adrienne Fichter, the multiple award-winning tech journalist of Republik, whom I will introduce to you shortly. And after the conversation, Ole von Uexküll will close the evening here in the Aula and invite you all to the apéro in the Lichthof.

  • Before I hand over to the screening and the lecture, I would like to say a few words about Audrey Tang as well. I’m delighted that you are here with us this evening because not only is your work extremely important, but you are also a very positive and humorous person. “Humor over rumor” is one of your mottos. In an interview you said, “Think of something that is funny, spreads quickly and takes the oxygen out of polarization.” And I think in a world characterized by general fear, doomsday scenarios, and dystopias, humor has taken on enormous political significance. This also applies to the field of new technologies, which are often perceived only as threatening. I am now very much looking forward to your presentation and will give you the floor after the film without further ado. Thank you.

  • Courage is choosing to fight even when all the odds are against you.

  • It is found in a willingness to take on cases that you know you might lose, simply because they need to be fought.

  • It requires the strength to speak truth to power and the resolve to turn fear into action—a resilience that the people of Myanmar embody daily.

  • True courage involves the ability to exercise civic care even in adversarial situations where isolation seems to be the norm.

  • It is the determination to never give up, choosing to stay with your community even when it would be easier to run away.

  • We envision a world that is safe, where our children get to experience the same innocent childhood that we did and live on the ancestral lands of their choosing.

  • We strive for a world that is more verdant, just, and joyful. This includes a peaceful, federal, democratic Myanmar free from tyranny and the grip of the military cartel, and a Sudan where the war finally stops.

  • We look toward a future where technology leaves no one behind and is shaped by everyone affected by it.

  • A world where solidarity is always at the forefront.

  • Good local time, Zurich. It is a great honor to be speaking with you as the 2025 Right Livelihood Laureate. First, let me reflect on what brings us together today: Right Livelihood. It is not a list of permitted professions or a fixed set of moral rules. It is a dynamic practice, a way of engaging with the world, as we have seen in the short clip, that binds those of us working in this field through curiosity, collaboration, and civic care.

  • His Holiness the Dalai Lama wrote, and I quote: “In order to carry out a positive action, we must develop here a positive vision. It is under the greatest adversity that there exists the greatest potential for doing good, both for oneself and the others.” Now, we are living in an age of intense pressure. We have seen the climate crisis, social polarization, and democratic backsliding around the globe. So you can really feel that the adversity, the darkness, is closing in.

  • But as the Dalai Lama teaches us, Right Livelihood thrives on adversity, offering hope in the form of light. Light that shines through the cracks. Light that illuminates opportunities for real change and lasting change.

  • And this brings me to a personal story about a crack.

  • When I was five years old, doctors told me and my parents that this child had only a 50/50 chance of surviving until surgery, which I got when I was 12. So for the first few years of my life, every night before lights out felt like a coin toss.

  • This instilled in me an urgency, what I call “publish before I perish.” Essentially, I recorded everything that I learned during the day. First on cassette tapes—many of you don’t know what that is anymore—floppy disks—okay, a few more—and then finally the Internet. The Internet is still around. And along the way, I discovered something profound. If you publish something perfect on the Internet, people just click “like” and then they scroll away. But if you publish something imperfect on the Internet, everybody comes out and argues with you.

  • But it’s actually a blessing. I made my friends this way. Something rich in vulnerabilities, in half-formed thoughts… Those cracks are invitations for participation. So people correct me, of course, but they also engage and co-create. As the late, great Leonard Cohen wrote: “Ring the bells that still can ring. Forget your perfect offering. There is a crack, a crack in everything. And that’s how the light gets in.”

  • For democracies, this really is key. We must see the cracks in our societies—broken trust, polarization, environmental cracks—not as reasons for despair, but as invitations for collaboration, those openings for light. We must treat these fractures not with cynicism, but with salves of goodness. This is the treatment prescribed by Right Livelihood.

  • So, let me talk about “MaxOS.” This is one of the cracks in our world that is felt very strongly here. The dominant logic of our digital age is that of utilitarianism, consequentialism, optimization, maximization. Find a number, any number, and make that number the biggest number in the world.

  • For example, on social media, we see algorithms trying to maximize the time we spend on touchscreens. When this MaxOS is applied to the realm of human relationships, it leads to catastrophic results. Around 10 years ago, many social media platforms switched from simple feeds—where we saw the content of the people we commonly follow and then we saw the same world—into what’s called a parasitic recommendation engine. It feeds on our divisions, our differences, and then maximizes engagement through enragement.

  • This outrage brought about a very high-PPM environment. I don’t mean parts per million of CO₂. As you see, I mean Polarization Per Minute.

  • Because if we see content online that builds relationships, we reflect on it, we go offline, we have a real conversation, a flesh-and-blood discussion. So we won’t stay glued to the screen. But if we see something that attacks our values, fuels division, amplifies extremes, then it sparks outrage, and we’re hooked, addicted to the screen and ready to fight.

  • That environment of very high PPM makes it very easy for authoritarians to exploit our differences. It used to be that we could take collective action against tyranny, against authoritarianism. But in a very high-PPM environment, we are fractured into hundreds of small differences, making collective action very, very difficult. And that is the opposite of Right Livelihood. We may call it “Wrong Livelihood.” It strips people of agency. It erodes our attentiveness, our relationality, our embodied life. It widens the crack, not to let any light in, but to deepen division.

  • How do we fix this kind of malign, parasitic AI system? Right now, many people talk about “human in the loop.” But to me, human in the loop of AI has the feeling of a hamster in a wheel. The hamster just keeps running faster, exercising. The illusion of engagement. But actually, the hamster has zero control over where the hamster wheel is headed. And in fact, it’s not heading anywhere.

  • This human-in-the-loop-of-AI, I think, poses the critical question: It’s not about whether we should accelerate AI, or we should pause AI, control AI, stop AI. The critical question is: how should we steer AI? I believe we must take control of this steering wheel and move from “human in the loop of AI” into “AI in the loop of humanity.”

  • In a nutshell, it means switching away from using AI as addictive intelligence that feeds us doomscrolling, and starting to use it as assistive intelligence designed to help us listen to one another. And this is Right Livelihood for the technological age: engaging responsibly with the tech ecosystem, discerning how our work shapes future minds, cultures, and institutions, and ensuring technology supports, not erodes, human dignity.

  • Now in Taiwan, we have a unique array of environmental, geopolitical, and social pressures that has made us very good listeners, very good at overcoming polarization, because for the past 12 years we have “enjoyed” the world’s heaviest polarization attacks from foreign interference. We enjoy 2 million cyberattack attempts every day. Here you have to pay for penetration testing. In Taiwan, we get that for free.

  • We adapted, and then we evolved into what I call a “geothermal” way of facing conflicts. As some of you know, Taiwan is the youngest tectonic island in the world, only 4 million years old. We see conflicts not as a volcano from which to retreat; rather, by listening deeply to the earth, in the social-fabric sense, we can convert the heat into powerful co-creative energy for renewal. And so the anti-social corners of social media can be rebuilt into pro-social architecture. Instead of a destructive fire to be extinguished, we can transform the plate-against-plate collision so that Taiwan rises ever skyward and starward.

  • The main algorithm here is called “Uncommon Ground.” It goes like this: 10 years ago, as an example, ridesharing services like Uber came to Taiwan, as they did elsewhere in the world. And this triggered intense conflict between taxi drivers and Uber drivers, the unions, the sharing-economy endorsers, the protesters against the extractive gig economy—the full spectrum. The online debate was very toxic, because you see people arguing, and then a quote-tweet, a quote-dunk, and people start attacking each other, and another dunk, and people become very, very polarized.

  • Instead of getting people to lose their minds on anti-social media, we built pro-social media. The technology is called Pol.is. It’s a digital space designed for listening, for seeing each other’s feelings. Crucially, there is no reply button, and no retweet button either. So there is no room for trolls to grow; there’s no way to dunk on someone just to score points. As you participate, you see one feeling from a fellow citizen at a time. For example, they feel that undercutting meters is not fair, but raising surge pricing is quite fair. Maybe you agree, maybe you disagree, and you press agree or disagree or pass. As you do so, you see your avatar moving toward a cluster of people who feel similarly to you. But then there is the other feature: a scoreboard that shows the leaders, those who propose the bridging statements that get the two clusters of people who otherwise never agree to start agreeing. So people start competing to build longer and longer bridges.

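A minimal sketch of the mechanics just described, under simplifying assumptions: a toy vote matrix (+1 agree, -1 disagree, 0 pass), plain 2-means clustering as a stand-in for the dimensionality reduction a real system would use, and a simple "bridging" score defined as the minimum agreement rate across the two clusters. The data, the `kmeans_two_clusters` helper, and the scoring rule are illustrative, not Pol.is's actual algorithms.

```python
import numpy as np

# Toy vote matrix: rows = participants, columns = statements.
# +1 = agree, -1 = disagree, 0 = pass/unseen.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [ 1,  0, -1,  1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  0],
    [-1, -1,  1,  1],
])

def kmeans_two_clusters(X, iters=20):
    """Plain 2-means on the vote vectors (illustrative stand-in only)."""
    X = X.astype(float)
    # Deterministic init: first point, plus the point farthest from it.
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(axis=1))]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

labels = kmeans_two_clusters(votes)

def bridging_score(statement_votes, labels):
    """Minimum agreement rate across the two opinion clusters:
    a statement only scores high if BOTH groups tend to agree with it."""
    rates = []
    for k in (0, 1):
        group = statement_votes[labels == k]
        rates.append((group == 1).mean())
    return min(rates)

for j in range(votes.shape[1]):
    print(f"statement {j}: bridging score = {bridging_score(votes[:, j], labels):.2f}")
# Statement 3, agreed to by both clusters, scores highest: the "uncommon ground".
```
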
  • So, not the left wing, not the right wing—the “up wing” began to form after three weeks of online conversation. Then we mapped where people actually agreed, and it turned out we agreed on most things. The polarization was just an illusion; the real issue was that the high PPM made it very difficult for us to see through it. So we transformed that very intense conflict into a coherent set of laws that the majority can live with. And this Uncommon Ground approach is now in wide use globally. If you have used Community Notes on X.com, on YouTube, on Facebook, and so on—it is the same idea, the bridging instinct.

  • We can also use this to solve issues that are more recent and even more insidious. Another example: last year, Taiwan faced a wave of AI-powered deepfake scams. You would open Facebook or YouTube and see advertisements featuring a famous CEO, like Jensen Huang of Nvidia, offering free investment advice or free cryptocurrency. You click, and it looks just like him; he talks to you, and it sounds just like him. It is most definitely not him. It is a deepfake clone of him running on an Nvidia GPU. So we needed a solution. But being the country with the most internet freedom in Asia also means that if you ask people, they say, “we cannot censor speech.” What to do?

  • So we turned to the people. We sent 200,000 text messages to random numbers around Taiwan asking only: how can we solve this issue together? And they gave us many good ideas, and thousands volunteered. We chose a statistically representative microcosm of the population. And this mini-public met online in an Alignment Assembly, using AI as assistive intelligence. In rooms of ten, each person looking at nine other people—45 rooms in total—the ideas were woven together by small language models, a process that would usually take human facilitators a week. They came up with brilliant, brilliant solutions.

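A schematic sketch of the room structure described above: a representative sample split into rooms of ten, each room producing ideas, and a "weaving" step that merges them into one bundle. The `assign_rooms` helper and the `summarize_room` placeholder (standing in for the small-language-model step mentioned in the talk) are illustrative, not the actual Alignment Assembly tooling.

```python
import random
import textwrap

def assign_rooms(participants, room_size=10, seed=42):
    """Shuffle a representative sample into rooms of ten, as described above."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + room_size] for i in range(0, len(shuffled), room_size)]

def summarize_room(ideas):
    """Placeholder for the small-language-model 'weaving' step.
    Here we just join the ideas; a real system would ask a model to
    merge them into one coherent proposal the whole room can endorse."""
    return " / ".join(ideas)

# 450 hypothetical participants -> 45 rooms of ten.
participants = [f"participant_{i}" for i in range(450)]
rooms = assign_rooms(participants)
print(len(rooms), "rooms of", len(rooms[0]))

# Each room produces ideas; a final pass weaves the room outputs into one bundle.
room_ideas = {0: ["label unsigned ads as 'Probably Scam'"],
              1: ["platforms liable for losses from unsolicited scam ads"],
              2: ["throttle non-compliant platforms by 1% per day"]}
bundle = summarize_room([summarize_room(v) for v in room_ideas.values()])
print(textwrap.fill("Draft bundle: " + bundle, width=72))
```
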
  • For example, one room said: on social media, by default, display a prominent label that says “Probably Scam” on every advertisement. Like cigarette labels. And if somebody like Jensen Huang digitally signs off on an ad and becomes accountable for it, then you take that label off. Very good idea.

  • Another room said: if a social media platform pushes an unsolicited investment-scam advertisement to me, one I did not subscribe to, and I then lose 7 million dollars to that unsigned advertisement, then the platform must be liable for the full 7 million dollars in damages, because, after all, I did not subscribe to it. Another very good idea.

  • And another room said: there is a certain foreign… I will state the name, TikTok… which at the time did not have a legal office in Taiwan. So if we assign liability, they could just ignore us. What to do? This room said: for every day they ignore us, slow down connections to their videos by 1%. Throttling, not censoring. It is just to make sure they pay attention to our legal system.

  • Within a day, the same day, a small language model wove all those ideas together into a coherent bundle. And then we put it to a vote. At the end of the day, more than 85% of the people in the microcosm voted yes to this very good package. And the other 15% said, at the very least, we can live with it; we consider it legitimate. So we did not just read the air, trying to guess what the social norms are. We wrote the air together, ensuring everybody shared the common knowledge that everybody else, all their neighbors, were okay with this bundle of proportionate responses.

  • That was last March. By May, we had put out the two draft laws. By July, they had all passed. And throughout this year, it has been an entire year now, there have simply been no more deepfake scams in social media advertisements. This is the geothermal engine in action. It demonstrates that we can use AI as assistive intelligence so that collective intelligence can draw the red lines around AI as addictive intelligence.

  • For this vision to thrive, the system I just described needs to be open and also locally governed. We cannot build those societal bridges on closed, proprietary, colonizing infrastructure. So we must resist digital colonialism. And this requires what I call “Freedom Architecture.”

  • The current digital landscape, as many of you know, does not offer freedom of movement. We’re like drivers on the Information Superhighway of social media with no off-ramps and no on-ramps. Think about it: a highway with no off-ramps and on-ramps is a hamster wheel. Once you get on, there’s no way to get off; otherwise you lose your relationships, your connections, your communities. So we need protocols for off-ramps and on-ramps, for freedom of movement. Think about how you keep your phone number when you switch from one telecom provider to another. This is called number portability. We need the same for our digital lives: for social media, for AI, for all the platforms.

  • Now policy is catching up. Europe is leading the way. The Data Act gives us data portability. The Digital Markets Act promotes interoperability for instant messaging. This spirit enables us to exit a platform with some of our data. But when the exit is real and includes, say, the social graph, then platforms must compete on care, not on capture. So I encourage those of you who work in engineering and in law to think in terms of the fundamental freedoms of movement and of association, and all the human rights, in the digital space.

  • This process of rejuvenation in Taiwan began because in 2019 we changed our curriculum. All of those ideas came from the citizenry, and many of the contributors started in schools, even before they turned 18. We see this process as working out in a civic gym, where the gyms are schools, literally gymnasiums. In 2019, we changed our basic education curriculum: we switched from “literacy,” which is about taking in information and critical thinking, to “competency,” which is about producing information and knowledge together. Because top-down journalistic fact-checking does not really inoculate people against high PPM. But the ability to do peer-to-peer collaboration on fact-checking, verifying sources, balancing coverage, producing contextualized narratives, making real-time corrections during presidential candidate debates: once our middle schoolers do that, they cut through the high-PPM environment.

  • According to the International Civic and Citizenship Education Study, by 2022, Taiwan’s 14-year-olds ranked at the world’s top when it comes to civic knowledge. The civic muscle, the capacity for public engagement. And so this means that the democracy of Taiwan is in very good hands because our 14-year-olds already feel their hands are on the steering wheel. And when we make sure that the elderly people listen to the lessons of the civic gym—not from the government but from their grandchildren—they actually strengthen their civic muscle too. The most active segments in our public participation platform are the 17-year-olds and the 70-year-olds. The intergenerational unity is just awesome. And granted they have more time on their hands, but also they care more about the future.

  • There is now a temptation in the research community to automate some of that Alignment Assembly civic gym. Specifically, there is software called the “Habermas Machine”—it’s real software—that can interview people individually and build a digital twin of each person. And then in silico, that is, in simulation, these digital twins start deliberating in a discursive democracy. It’s very good. It produced extremely good policies. But if we just delegate our deliberation to the robots, it’s exactly like sending robots to the gym to lift weights for us. I’m sure they’re very impressive. They can lift a lot of weight. But what is the point?

  • We cannot delegate democracy. We need to be part of the civic gym. We need to ensure that our schools, the civic gyms, continue to train our civic muscles. We need to ensure that we research and develop the tools for digital democracy and develop the ethical frameworks that guide our technology. We must move beyond MaxOS, toward the Ethic of Care.

  • In Oxford, I’m developing the “6-Pack of Care.” You know: like muscles, and also portable. It is about alignment to a process of relationship: relational alignment. The process of care, as the ethics of care tells us, should maximize not engagement but relational health, mutual trust, and inclusion.

  • Here we return to Right Livelihood, not as a virtue for individuals, but as something societies can cultivate as the civic virtue of an entire community. This is what I call “Techno-communitarianism”: technology that fosters the ability of communities to integrate their own interpretations of ethics and contemplative practice, through policies that protect human and ecological well-being. And this is aligned with many wisdom traditions, those institutions that nourish compassion and engagement.

  • I think the future is rooted in this attentiveness, this care, this presence, where Right Livelihood emerges from a symbiogenesis of communities with other communities. We can build local AI systems that take up a local community’s norms, for example one around climate justice for a specific community. Another community can train an AI system to attend to biblical creation care. And we have worked to build social translation systems so that the machine can show this group of people that that group of people is, in its own terms, also doing God’s work. This amplifies the broad-listening technique that has proven effective in practice in Taiwan.

  • Now, many people are looking for superintelligence to be built in some lab somewhere. But I think the future of intelligence is not a machine waiting to be invented. It is us. Our augmented collective intelligence. It is our capacity to coordinate with care. We, the people, are the true superintelligence.

  • Remember: Positive vision brings positive action. “In the greatest adversity lies the greatest potential for doing good, both for oneself and the others.” The adversity is here. So is our capacity to care. AI must not divide us with outrage. It must connect us through Uncommon Ground.

  • In closing, I offer a poem, a prayer, a vision for us working together. When I first became Taiwan’s Digital Minister in 2016, I noted that in Taiwanese Mandarin, shùwèi (數位) means both “digital” and “plural.” So I wrote my own job description that remains my mantra to this day. It goes like this:

  • When we see “Internet of Things,” let’s make it an Internet of Beings. When we see “Virtual Reality,” let’s make it a shared reality. When we see “Machine Learning,” let’s make it collaborative learning. When we see “User Experience,” let’s make it about human experience. And whenever we hear that a Singularity is near, let us always remember: the Plurality is here.

  • Thank you so much. Thank you.

  • (Standing ovation)

  • After a fantastic speech by Audrey, we are all now looking forward to learning more about Audrey’s mind-changing work. I would now like to give the floor to Adrienne Fichter from Republik. Even though I don’t need to introduce Adrienne to most of you, I would like to emphasize that the political scientist was honored as Swiss investigative journalist of the year in 2020, 2021, and 2024. She is also the co-founder and community manager of the web startup Politnetz, which also won several awards. In 2020 she published the book Das Netz ist politisch (The Internet Is Political), Part One. We cannot imagine anyone better suited to conduct the interview with Audrey. Welcome, Adrienne, and thank you so much for being here tonight.

  • Now? Yes. Thank you so much for this introduction, and for having me stand here at the front rather than sitting in the rows as a student; I was also a student of Professor Krüger’s… Well. There are so many aspects in your admirable speech I would like to discuss: AI regulation, polarization attacks, cyberattacks. And actually, if I asked about them all, we would probably need 60 minutes, which would end up as a long-read piece for Republik. But we will focus, and let’s see where we get.

  • But let me start with a personal note about the first time I heard about Taiwan and the word “Pol.is.” That was in 2017. It was really one of the defining, eye-opening moments in my career as a tech journalist when I heard about Pol.is, and then also about the movement “g0v” [gov-zero] and everything Taiwan is doing. And I realized: wow, so the digitalization of democracy can be a good thing. Because back then, as now, when we thought about digital democracy, we thought about all those platforms you mentioned: designed to maximize attention, engagement, and polarization, fueling disinformation and deepfakes, and contributing to the rise of populism.

  • But then I heard about Taiwan and about Pol.is. And I got a video call with someone who developed Pol.is…

  • Colin Megill. He told me about it, and I also wrote about this case in my first book, called Smartphone Democracy. He showed me a live demo, and it was really astonishing, as you said. He showed me this example: do we want to have Uber in Taiwan or not? As you said, the ridesharing example. And it was really stunning. I saw this plurality of opinions clustering, how the clusters are connected and where the bridges are. So it really shows this plurality of opinions. And, as you said, there was no reply field, to avoid the troll comments. So my conclusion back then was: in the end, it’s really all about the design of technology. The political design. Do you agree with that? Or is it too easy to say that?

  • Yes. It is… In fact, we’ve discovered again and again that it requires two essential ingredients. One, you need a pro-socially designed space, where people add to each other instead of detracting from each other. And second, you also need air cover: the pre-commitment from the committed listeners that if our people really do agree on everything, or at least roughly agree on everything, then we commit to implementing that particular everything. And the two feed on each other, because to give no trust is to get no trust. So it’s always easier to start in the gym, to try a local issue first, a hyper-local issue first, and then to start building trust by offering more pre-commitment in larger pro-social spaces, which invites more pre-commitment and air cover, and so on and so forth.

  • And you said… once you brought up another example in an interview about… one of the topics was: Should Taiwan change its time zone?

  • Did you also discuss that with Pol.is at the end of the day?

  • It was on the Join platform, which was built after the vTaiwan platform as an institutionalized national participation platform that reaches, I think, 10 million people, roughly half of our country of 24 million. So it’s a pretty wide platform. And anybody with 5,000 signatures on an e-petition can force a conversation, a response from a minister. But this particular case was different, because there were actually two petitions. One, more than 8,000 people strong, said: let’s change Taiwan’s time zone to UTC+9, the same as Japan. And another, also more than 8,000 people strong, said: let’s remain at UTC+8. So it was actually much more, 16,000 people or more.

  • And we invited the people who comment on Join—which uses the same design: no reply button, you can like, you can unlike, and there are two columns, one for the best supporting arguments and one for the best counterarguments—and then we invited the people who posted the best of those ideas on both sides to an in-person conversation. And it turned out they agreed far more than they disagreed. Because we showed them, on a matter-of-fact level, how much it would cost to impose such a time-zone change, both one-time and recurring. And then we started brainstorming: if we are about to spend this much money, are there better ways to achieve your common value? It turned out their common value was to have Taiwan be seen as more unique in the world. But forcing everyone to change their clocks… okay, I guess it’s somewhat unique, but it gets old very quickly.

  • And so we brainstormed better ways. And we did come up with better ways. For example, hosting human rights conferences like RightsCon. Giving Gold Card residency to people who contribute to open source, open science, and open access. We did that too. We publicized the way we built our marriage equality by using Uncommon Ground, so that the individuals wed as individuals but their families do not form legal kinship, and we made sure the world learned about that, the first in Asia, and so on. So we ended up saving taxpayer money while achieving that goal of “letting Taiwan be seen,” which is a common, indeed Uncommon, Ground for both sides.

  • So in Switzerland we think that digital democracy equals internet voting, and equals e-collecting, the collection of signatures for initiatives and referendums. And we think that’s enough, that it’s kind of perfect this way. But the question is: we don’t have this digital infrastructure for deliberation or agenda-setting, for shaping opinions. How would you convince Swiss people that maybe they could improve their democracy a bit by having less polarized opinions in parliament, or maybe earlier, in the deliberation stage?

  • Well, I’m sure you already have surveys, that’s to say polls, and institutional media that runs such poll results. So in a sense, our way, the Alignment Assembly, from one side looks like a regular citizen assembly. But on the other side, it looks like a poll. So you can actually think of it not as deliberation but rather as a survey that surveys not individuals but groups of ten. It’s as simple as that.

  • Because if you survey people individually, they tend to be extreme. They’re like “YIMBY! Yes, in my backyard!” “NIMBY! Never in my backyard!” But if you say to a group of ten: only the ideas that resonate in this room have a chance to pass out of the room and cross-pollinate the other rooms, then everybody becomes MIMBY—“Maybe In My Backyard”—if you do this, if you do that. Right? So again, it’s a function of the space, not a function of the people. But if you publish the result of such a deliberative poll, it looks exactly like a regular poll. And so all the existing ways you integrate polls into your democracy still function. The point here is not to reinvent some wild new form of digital democracy. It is to ensure that we come to see each other through overlap, not outrage, using pro-social design.

  • But still, I mean, our main digital communication infrastructure here in Europe, and possibly everywhere, is mostly provided by a few American big tech companies, and has been for 15 or 20 years. How can Europe be motivated, or what could the incentives be, to build its own market for such technologies, technologies that promote trust, promote consensus, and present the plurality of opinions as a chance, as something good? How can we get there, to create such a market in Europe? Where should we start?

  • Well first of all, I use Proton, which last I checked is based here, not some digital colonialism. And there are many platforms like this. Proton is also Taiwanese-founded so I have a special affinity.

  • There are people who build such alternatives, but they really struggle to solve the problem of the “cold bootstrap,” because most people are already trapped on those large platforms. There was a study a couple of years ago of average US undergraduates using TikTok. If you want to convince them to quit TikTok, you have to pay them money. It turns out that, on average, you have to pay them $60 a month to quit using TikTok. They’re that addicted.

  • However, if there were a magic button you could press so that everybody they know also moves off TikTok together… they would be willing to pay $30 a month for that to happen. So it’s a trap, literally a product-market trap. Everybody is losing utility, but the first one to move out loses even more, because the community is held hostage, so nobody wants to move. Which is why we work, for example, through the Project Liberty Institute with Governor Cox in the state of Utah in the US. This year they passed the Digital Choices Act.

  • Starting next July, if you’re a Utah citizen and you migrate from TikTok to, say, Bluesky or Truth Social—both are open source—the old network is required to forward new likes, new reactions, and new followers to the new network, just like number portability for telecom. So you can think of this as the individual taking the off-ramp while still keeping their community. And it forces the platforms to compete on the quality of care. And I have every belief that it is then the open-source, open-participation platforms that will win this race to the top, not to the bottom of the brainstem.

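To make the forwarding idea concrete, here is a toy sketch of the kind of obligation described: once a user has migrated, the old network forwards new likes, reactions, and follows to the user's new home. The JSON payload, the `OldNetwork` class, and the endpoint are hypothetical; this is not the Digital Choices Act's actual specification or any platform's real API.

```python
import json
from dataclasses import dataclass, asdict
from typing import Dict, List

@dataclass
class Interaction:
    """A new like/reaction/follow that arrives on the old network
    after the user has migrated away."""
    kind: str          # "like" | "reaction" | "follow"
    actor: str         # who interacted
    target_post: str   # which of the migrated user's posts (empty for follows)

class OldNetwork:
    """Toy model of the departing platform's forwarding duty."""
    def __init__(self):
        self.forwarding: Dict[str, str] = {}   # user -> new home endpoint
        self.outbox: List[str] = []            # payloads we "send" (stand-in for HTTP)

    def register_migration(self, user: str, new_home: str):
        self.forwarding[user] = new_home

    def record_interaction(self, user: str, event: Interaction):
        if user in self.forwarding:
            payload = {"to": self.forwarding[user], "event": asdict(event)}
            self.outbox.append(json.dumps(payload))   # would be an HTTP POST in practice

old = OldNetwork()
old.register_migration("alice", "https://new.example/inbox/alice")  # hypothetical endpoint
old.record_interaction("alice", Interaction("follow", "bob", ""))
old.record_interaction("alice", Interaction("like", "carol", "post/123"))
print("\n".join(old.outbox))
```
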
  • But I think Europe is trying too, no? Like with the Digital Markets Act, having interoperability…

  • Well, I mean, there are two differences. One, the EU is doing another round of consultation to expand from instant messaging and group messaging to social networks. So that’s the main difference. The other is that the Utah law applies to all social media, not just the very large social media. So it gives everybody a real incentive to build on the infrastructural layer, on the protocol layer, instead of on the platform layer, and just hoping that you don’t reach the very large operator status.

  • So you’re a cyber ambassador promoting this model of digital democracy for the rest of the world. But I was wondering when I’m reading your interviews and also looking at the global reality… doesn’t this require something specific? Is there a strong cultural component in the Taiwan digital democracy model?

  • Like, does it require specific skills from citizens: tech sophistication, tech affinity, a strong sense of community? And also quite an incredible amount of free time, as you said, with some young people and some older people having the time to participate online? Or is there something that makes you think it would work somewhere else too?

  • Yes. And we also have Japan joining the ranks. It started in the Tokyo Metropolitan Area: I have worked for many years now with Governor Koike-san to run the Governor’s Cup Hackathon, which is modeled after the Taiwanese Presidential Hackathon. And one of the very young people reading the Plurality book, Takahiro Anno-san, 33 years old at the time, last year, a sci-fi writer and AI engineer, decided after reading the book to run for Governor, to put these ideas to work.

  • But he did not have a policy platform for the election. So he crowdsourced it, saying anybody hashtagging #TokyoAI could become part of Anno’s platform. He used exactly the same tools: Pol.is, Talk to the City, and so on—and launched it one month before the gubernatorial election. And by election day, an independent think tank had ranked his crowdsourced platform the best, even better than Koike’s.

  • But of course Koike won re-election. Still, Anno-san got more than 1% of the votes. Koike-san then invited him to run the Tokyo City consultation for Tokyo 2050. And then he formed a new party, Team Mirai, the Future Party. With 2.5% of the write-in votes for his new party, he is now a member of the House of Councillors, a senator in the Japanese Diet, and is putting digital democracy to work again at the national infrastructure level.

  • I have also worked for the past couple of years with Governor Gavin Newsom of California and his First Partner Jennifer on the Engaged California platform. Initially, we built it to have a conversation with teenagers and even younger children about social media usage, because that is a classic case of “something about them without them,” and we wanted to include them and their parents. But in the week of launch, we did not get to launch, because the wildfires happened in Los Angeles. So we pivoted instantly to a conversation, using bridging algorithms, for the survivors of the Eaton and Palisades fires about how to mitigate and prevent future wildfires. And it produced, you can check the report online, very good “uncommon ground,” surprising common ground, around the different measures the state government can take. Encouraged by this experience, they are now writing it into law so that it becomes state-level infrastructure. And they are also consulting internal state employees so that they can suggest better ways to use AI in their line of work. So it’s like government efficiency—like DOGE, but bottom-up, not top-down.

  • So Japan and California, last I checked, are both much larger than Taiwan. So we are no longer the largest polity with digital participation infrastructure.

  • So, I still have some doubts about this willingness to be permanently involved, to discuss things at this deep level online on a permanent basis, among young people, for example. I mean, it’s hard to imagine the AI Assemblies you mentioned… That would be something very interesting here too, because we are starting the AI regulation and policy process as well; the AI Act is being implemented at the EU level. So are young people really willing to discuss how artificial intelligence should be regulated?

  • Yeah, definitely. And Finland just ran one this September: “What do youth think, Finland?” when it comes to AI. Again using the Pol.is technology for sensemaking.

  • I think young people by and large feel that the AI systems are out of their control, as if the control were somewhere else. So if the tech-broligarchy decides to steer the AI systems toward some other future, sacrificing the young people, then the young people really want to take that steering wheel back.

  • And so I think the key here again is air cover, is commitment. When people know that whatever they do agree on will be treated as real red lines by the Minister of Digital Affairs at the time, yours truly, then people really pour in with the most nuanced, most considerate ideas. So again, pro-social design is the first step, and then you have to offer the air cover.

  • So now, speaking of AI regulation again: how does it look in Taiwan? Are you also defining design principles for how this technology, or large language models, should be designed? Is it in the law? Is it not a technology-neutral approach, as we would say in Switzerland? Do you actually define the design?

  • Yeah. Our AI Basic Act is on its way to the final reading. So fingers crossed. I think it’s down to the last clause. And I think the negotiation throughout the AI Act conversation in Taiwan put an emphasis on interoperability, data innovation, data reuse, and things like that. I think because Taiwanese people really do like this vision of Personal Computing.

  • Because we made many of those PC compatibles back in the 80s. It was a very different era, of course. But I was born in a place that fostered this counter-cultural thinking that you do not need to rely on the mainframe computer, where the big state or the big corporation monitors every key you press into your terminal—which was the fancy way to connect to the cloud back then.

  • And we want Personal Computing, which means you can install your own spreadsheets, your own desktop publishing. You can share your fixes, your patches, to software like Apache, and together form the movement of Free and Open Source Software. Without Personal Computing, there is probably no FOSS movement; these two go hand in hand. And so nowadays we are already looking at energy-efficient small language models, such as the ones the Taiwan National Development Fund invests in. We have invested in more than 100 such models. Some of them are so energy-efficient that they do not use the transformer architecture; they use Retention Networks, Power Retention, and so on, so that they can run on your Swiss watch. They use constant memory, yet they are still top of the world when it comes to inference performance. But this is not a tech seminar, so I will stop here. It is, though, a topic I am deeply compassionate and passionate about.

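A toy illustration of the constant-memory point above: a transformer-style key/value cache grows with every token generated, while a retention-style linear recurrence keeps one fixed-size state. Single head, no normalization, random vectors; this shows only the general idea, not any particular model's architecture.

```python
import numpy as np

d = 8                      # toy embedding size
rng = np.random.default_rng(0)
decay = 0.9                # retention-style exponential decay

# Transformer-style inference: the key/value cache grows with every token.
kv_cache = []              # list of (key, value) pairs, length == tokens seen

# Retention/linear-recurrence-style inference: one fixed-size state matrix.
state = np.zeros((d, d))   # constant memory, independent of sequence length

for t in range(1000):
    q = rng.standard_normal(d)
    k = rng.standard_normal(d)
    v = rng.standard_normal(d)

    # Growing cache: would attend over all past keys/values (O(t) memory and work).
    kv_cache.append((k, v))

    # Constant state: fold the new token into the state, then read it with the query.
    state = decay * state + np.outer(k, v)
    y = q @ state          # output for this step, computed in O(d^2)

print("KV cache entries after 1000 tokens:", len(kv_cache))
print("Recurrent state size stays:", state.shape)
```
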
  • Sorry, moving on, let me catch up. You told me in a preparatory talk, when I brought up what we do in Switzerland: okay, you have direct democracy, but in public votes you sometimes have slight majorities, 51% yes against 49% no. That would not be enough in Taiwan. You really aim for 80%, also for the AI regulation law you want to pass these days, which should in the end reach a minimum of 80%.

  • Yeah, 80% who can live with it. That is much easier to achieve than complete consensus, which only ensures that the people with the most time win, because it becomes endless negotiation. But if you only ask, “Can you live with it?”, you can very quickly get a snapshot. Like a group selfie. And then you pass it. Of course there are some unintended consequences, but that’s fine; just take another group selfie. And if you take group selfies quickly enough, it becomes a selfie movie. That is how people see themselves reflected. Because there is nothing more powerful than your few words, written on Pol.is or on the Join platform, becoming policy the next week. It is very empowering for people younger than 18 to see their names in the credits for national-level debates and policymaking, and so on. And then they put much more time into the steering wheel, not the hamster wheel.

  • Very good. Let’s talk about some challenges you’re facing with your friendly neighbor.

  • You don’t get that anywhere else.

  • You don’t like the term “disinformation campaigns”; you prefer “polarization attacks.” This is something Switzerland, Europe, and every other country are facing too, this constant hybrid war. How can we achieve digital resilience against deepfakes and disinformation (I’ll still call it that) and against polarization as well? Because we also have some neighbors, maybe not that close, but somewhere.

  • Okay. They are widening their free service now, and it’s because of language models. It used to be that you had to speak the target language to successfully mount an attack. But now malicious AIs not only speak all languages well; even the memes, that is to say the pictures, can be translated perfectly into any target culture. So it does widen the aperture for polarization and disinformation attacks.

  • In Taiwan, we basically invite these as topics for conversation. Social objects. We pre-bunk instead of debunk messages. Debunking is after the fact. And it’s bound to polarize some people if you debunk.

  • And you created a deepfake of yourself first, to show people?

  • Yes, I deepfaked myself. So around three years ago, an actor played me in a deepfake. It was kind of a fake deepfake, because it actually took a lot of computation to produce. But we said that in the future this will take not 12 hours but 12 seconds to deepfake me completely, and maybe eventually 12 milliseconds, in which case you cannot tell whether the person in a video conference with you is human or not.

  • And we say to the people: this is coming. And the lesson of that deepfake exercise is to shift from the content level—where you basically assume anything can be synthetic—to the behavior level and the actor level: to ensure there are digital signatures and to look for behavior patterns, for example disclosing where a message is coming from.

  • We embrace the idea of “meronymity,” or partial anonymity. So we do not require everybody to sign their name to their messages; that would be very bad for whistleblowers, doxxing them. But it is also not complete anonymity, because our constitution never offered freedom of expression and amplification to foreign robots. So at the very least you need to disclose that you are a human and the region you are connecting from.

  • And so all this meronymity, all this selective disclosure, so that you can prove “Ich bin ein Berliner” without revealing your street address in Berlin, for example: these are fundamental infrastructures that, taken together, offer a very strong pre-bunking capability rather than debunking.

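A simplified illustration of selective disclosure using plain hash commitments: commit to several attributes, then reveal only one (the region) together with its salt so a verifier can check it, while the others stay hidden. Real systems use verifiable credentials or zero-knowledge proofs rather than this bare construction; the attribute names and the flow here are assumptions for the example.

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Hash commitment to one attribute: returns (salt, digest)."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return salt, digest

# Issuance: the holder commits to each attribute separately.
attributes = {"is_human": "yes", "region": "Berlin", "street_address": "…"}  # address never shown
salts, commitments = {}, {}
for name, value in attributes.items():
    salts[name], commitments[name] = commit(value)
# The verifier stores (or receives, signed by an issuer) only `commitments`.

# Selective disclosure: reveal only the region, keep the street address hidden.
disclosed = {"region": ("Berlin", salts["region"])}

def verify(name, value, salt, commitments):
    """Check a revealed (value, salt) pair against the stored commitment."""
    return hashlib.sha256((salt + value).encode()).hexdigest() == commitments[name]

value, salt = disclosed["region"]
print("region proven:", verify("region", value, salt, commitments))
print("street address revealed:", "street_address" in disclosed)  # False
```
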
  • And how fast does this pre-bunking or debunking have to be when you see something coming up? I read somewhere that it is within two hours; that’s the Taiwan reaction time.

  • No, we have upgraded that to one hour. So yeah, we have this 2-2-2 principle. Whenever citizens, through the collaborative fact-checking grassroots network Cofacts, detect a viral polarization or disinformation attack, then within two hours we need to roll out two minutes of video, or two pictures with 200 characters or less each, that are funnier than the disinformation. It’s called “Humor over Rumor.”

  • Because if it’s funny, it travels faster than outrage. And if you roll it out within the two hours, you can do a Tenet move and pretend you’re doing pre-bunking. Because for most people, this pincer attack means that they see the polarization after they see the humor. Right? So for them it’s pre-bunking, subjectively speaking. But if you act after two hours, it’s too late.

  • And you force platforms like Facebook and YouTube to collaborate on this selective disclosure and to identify scams and fraud and everything. And if they don’t, the sanction is slowing down their connections?

  • To slowly slow down their connections.

  • Have you ever once said that to the EU Commission for example?

  • Well, listen. Radical interoperability and throttling is not stifling innovation, last I checked. It is not censorship of content, last I checked. And the state of Utah is conservative, last I checked. So this is not something progressive versus conservative, not something of the left versus the right. This is about our fundamental freedom to move across services, so that the best service gets our business. I think this is a very strong argument, and I have not heard any MP here, or any MEP, say, “Oh, I’m pro-fraud, so I don’t want this kind of measure.” Right? So I think the anti-fraud angle is definitely the uncommon ground, the surprising common ground, that can unite the left wing and the right wing behind this kind of reform.

  • Okay. So a way to enforce a legal framework. Speaking of frameworks: in the EU there is this shift in the public discourse, in the narrative about regulation: that regulation is hampering innovation, so we need to get back our digital sovereignty, build up our own tech companies, invest in them, and have fewer rules. When I hear what you are saying, do you think it is actually not this way, because it is all about protocols?

  • Yeah, I think I pre-bunked that question.

  • Do you think so? So frameworks like the GDPR, the Digital Services Act, the Digital Markets Act… they’re not really obstacles to our digital sovereignty, are they?

  • I think the difference between a guardrail that stifles innovation and a protocol that fosters healthy competition and innovation is that the protocol must always come with not just a guardrail against something, but a guide rail, an alternative.

  • As this is, I heard, the Nobel for alternatives, the Right Livelihood Award—the idea is to come up with something on a small scale that actually works. So, for example, I am working with a team to build a pro-social ranking algorithm that can bridge Truth Social and Bluesky together, the example I used on stage.

  • It’s called “Green Earth” because: Bluesky, Green Earth. And Green Earth uses language models, so you can tell it, you know, “I prefer biblical creation care” or “I prefer a climate justice framework,” and so on. And then it figures out the connective tissue for the ranking system to be maximally bridging. And once we have that good alternative running on Bluesky, it actually puts pressure on X and other social media companies, because if they don’t adapt, and policymakers understand there are viable alternatives, then they can set the floor at those alternatives, making it impossible for big tech not to adopt such open-source initiatives. So it’s like forking the platforms into protocols and then forcing them to merge back.

  • And are you having this conversation with Bluesky now, to enforce this interoperability?

  • Well, Bluesky was designed around interoperability. It’s the AT Protocol, right? So the AT Protocol can federate with the Fediverse, with Nostr, Farcaster, Lens—I’m missing a few. It is all one big interoperable web. And the lesson here is not that we need a Euro-champion to replace those colonizers, but rather that we need a Euro-stack that is decentralized and democratic, so that we can defend democracy by saying: once you are in Europe, you need to be like a utility, at least offering off-ramps and on-ramps.

  • Okay. So if the pressure from the EU Commission is not working, then at least you have contributed to this openness and plurality again.

  • So, the very last question. In the book Plurality you criticize how Western democracies have failed in the past to provide digital services. And I immediately thought of the pandemic, how European countries, Switzerland, the UK, and others, managed COVID-19 monitoring sometimes by fax machine. What did you think when you heard that? And what should governments’ core infrastructure be, to be prepared for the next pandemic?

  • Well, for Taiwan, COVID-19 was the second pandemic. We were hit the hardest by SARS and suffered the most of all countries when it happened. And when SARS happened, our health card was literally a piece of paper with six boxes on it, so you could stamp the date you entered a clinic. So it was worse than the fax machine, speaking from personal experience.

  • And it really is the case in Taiwan that the SARS experience propelled us to establish the Central Epidemic Command Center, to digitalize our public health system, and to invest in contact tracing, vaccination, mask rationing, and so on and so forth.

  • And so, in Taiwan, we had had years of citizen assemblies and online Pol.is conversations about exactly where to draw the boundary between, say, public health and privacy. And we did that before community spread reached Taiwan. So when community spread did come, we knew exactly what to do. Civic activists invented a check-in system based on zero-knowledge principles—meaning no party learns anything extra—using an SMS QR code at each venue. The venue owner learns nothing about you, not even your phone number. Your telecom only learns a random code; they don’t know which venue it corresponds to. The state learns nothing, and after 14 days everything is erased. But it does let us recursively notify people when they were in places that had community spread. And it’s entirely voluntary. It served us very well until the first wave of Omicron.

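A schematic of the split-knowledge idea described above: the venue is identified only by a random code, the carrier's log holds only phone-plus-code, the code-to-venue mapping is held separately, everything past the retention window is erased, and the two datasets are joined only when tracing a venue. This is a simplified illustration with invented function names, not the actual Taiwanese 1922 SMS system.

```python
import secrets
from datetime import date, timedelta

# Health authority side: maps random venue codes to venues, erased after 14 days.
venue_codes = {}                       # code -> (venue_name, issued_on)

def issue_venue_code(venue_name: str) -> str:
    code = secrets.token_hex(4)        # random code printed as the venue's QR/SMS code
    venue_codes[code] = (venue_name, date.today())
    return code

# Telecom side: sees only (phone, code, day); no venue names.
sms_log = []                           # (phone, code, day)

def check_in(phone: str, code: str, day: date):
    sms_log.append((phone, code, day))

def purge(today: date, retention_days: int = 14):
    """Erase anything older than the retention window, as described in the talk."""
    cutoff = today - timedelta(days=retention_days)
    sms_log[:] = [row for row in sms_log if row[2] >= cutoff]
    for code in [c for c, (_, issued) in venue_codes.items() if issued < cutoff]:
        del venue_codes[code]

def contacts_for_venue(venue_name: str):
    """Only when tracing is needed are the two datasets joined."""
    codes = {c for c, (name, _) in venue_codes.items() if name == venue_name}
    return sorted({phone for phone, code, _ in sms_log if code in codes})

cafe = issue_venue_code("Cafe Example")
check_in("+886-900-000-001", cafe, date.today())
print(contacts_for_venue("Cafe Example"))
```
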
  • So I think it is in these non-crisis times, these more chronic times, that you should have this national conversation, maybe with some Alignment Assemblies, about these very important boundaries, the red lines to draw. And once a pandemic or some other disaster happens, those red lines also become bright arrows, pointing to the kinds of solutions, using the technology of that day and age, for which we already have the pre-commitment of the people, the legitimacy of the people.

  • Yeah, we should have had this famous mask app where you can see the availability of masks in all the pharmacies.

  • Open API. Make it permissionless.

  • Yeah. I think we have made our point here. I’m sure we could continue this conversation for ages.

  • Um, you promised to tell me an advertising slogan at the end. Do you remember? It was not “Humor over Rumor,” it was something about: Let’s make…

  • …something great again?

  • Yes. Let’s make digital democracy great again.

  • Yes. Let’s make digital democracy great again. Thank you.