-
I want to start by talking to you about your childhood. Let’s go right back to the beginning. How did that shape who you are today?
-
I was born with a heart defect. When I was almost five years old, doctors told me and my family that I only had a fifty percent chance to survive until heart surgery. So for the first twelve years of my life, I could not get too upset or too happy; otherwise, I would faint and find myself waking up in the hospital.
-
I learned breathing exercises, meditation, and how to keep myself from getting too joyful or too angry. And I learned that before I went to sleep, I had to publish everything I learned that day. I call this “publish before I perish,” because it felt like a coin toss; if the coin landed the wrong way, then maybe I would not wake up. So first on cassette, then on floppy disk, and finally on the Internet—I didn’t accumulate—I just shared what I had learned that day.
-
That’s an extraordinary response for a child, and it must have been frightening for you to grow up in that shadow.
-
Yeah. And I think I was only able to cope with this situation because I really learned—especially on the Internet—that imperfection is actually an invitation. If I post something too perfect, people just press like and then they scroll away. But because I didn’t have time to be perfect, people saw those vulnerabilities, those cracks, as invitations to chime in and to contribute. And so I made many good friends simply by publishing the drafts that I did not have time to perfect during the day.
-
And you must have acquired quite a strong sense of discipline. I mean, I can’t imagine saying to my child, “try not to get too happy or too sad.”
-
Well, I mean, the thing is, these are survival skills. You keep practicing. And if I did not practice, if I fell outside of homeostasis, of stability, then—as I mentioned—I would just find myself waking up in the hospital, and then I’d have to try again. So with time and practice, I think I got pretty good at tranquility and at seeing everything as a kind of balance—in a way, what the Daoists call “effortless action.”
-
And so your tools during this time really were reading philosophy and the Internet, and the odd floppy disk.
-
Yes. Thanks for the framing. That was pre-Internet, but in my childhood we already had dial-up bulletin board systems. So most of my early education was reading, for example, from the Gutenberg Project, where people digitized works that had fallen out of copyright. And because at the time the literature of World War I and World War II was still in copyright, I read from the more halcyon days of the Enlightenment and so on.
-
Do you still consider yourself to be philosophical?
-
Certainly. In fact, I am now a Senior Fellow at Oxford in the philosophy faculty, in the Institute for Ethics in AI, working to steer the machines away from maximizing some number—utilitarianism—and toward a different ethic of care.
-
I want to come back to that in a few moments. But first, let’s keep up with your life story here because you were involved in a protest movement in 2014, and that led to you working with the government in Taiwan. That’s quite an extraordinary journey in itself.
-
Yes. So it started, I guess, as a protest in March 2014, where people did not like the fast-tracking of a trade deal with Beijing that would have invited Huawei, ZTE, and so on into our 4G infrastructure and our publishing industry. But very quickly, we turned our protest into a demo: not just against the trade deal, but for a new way of democracy that invited the half-million people on the street, and more online, to have a real conversation—in small groups of ten—about what preferences were acceptable when dealing with Beijing and trade deals in general.
-
So after three weeks of nonviolent occupation, we actually converged on a set of uncommon ground—surprising common ground that people could live with. And the Speaker of the Parliament at the time said, “Well, the people’s version is better than the MPs’ version, so let’s go with it.” So it’s one of those very rare occupies that was not just about taking something down, but about building something new.
-
Did you expect it to work?
-
Yes, certainly. I had translated some works by Manuel Castells at the time. He specialized in network communication theory and analyzed many occupy movements around the world. So already at the time, we understood that if we use social media—the antisocial corners of it—it only amplifies the polarization, the extremes. But the way we constructed the conversation network, through broad listening rather than just broadcasting, was to amplify the middle ground rather than the extremes. So we see conflict and polarization not as fires to be put out, but as kinetic energy for a geothermal engine, creating something that people can get behind no matter how large the ideological gap is.
-
Did you start out thinking this was going to be an act of rebellion, or did you always intend to shape your country legitimately?
-
I think it is a little bit of both. It was a rebellion in the sense that it was direct action: we seized the means of communication. We literally bridged the Internet connection; I personally brought 350 meters of Internet cable into the occupied Parliament.
-
Did you? And you connected everything?
-
Yes, exactly, so that people on the streets could see into the occupied Parliament. And this is called “humor over rumor,” because rumor only spreads in a vacuum of communication. But if you can make the livestream interesting and humorous, people feel engaged and in the room, overcoming the polarization attacks, the disinformation, and so on. We always expected it to work as a demonstration. So the rebellion was the direct action, but the demo was of a new form of democracy.
-
And so from that moment, you started working with the government and you helped to introduce a lot of changes that dramatically improved the approval ratings of the government. This is in the sort of five years leading up to the pandemic. Tell me about some of those initiatives, things like vTaiwan.
-
Yes. In 2014, when the Sunflower Movement happened, President Ma Ying-jeou was at a 9% approval rating. So in a country of 23.5 million people, anything the President said, 20 million people didn’t really trust. However, we understood that to give no trust is to get no trust: the government needed to radically trust the people. And through the vTaiwan process, we did exactly that.
-
We realized, of course, that we cannot physically occupy the Parliament or a government building for every controversy. So we replicated the process from those three weeks of occupation online. The vTaiwan process gathers the divided ideas—from, for example, Uber drivers and taxi drivers when ride-sharing first came to Taiwan—and uses the same method to amplify the ideas from smaller groups that can cross-pollinate across ideological differences. For example, saying that undercutting meters is bad, but surge pricing—raising the price—is fine: that is a bridging idea. And by making the bridging ideas viral, not the extreme ideas, again after three weeks of nonviolent communication, we agreed on a set of laws that was not just fair to taxis and Uber but also took care of the rural places. And so after a hundred or so such collaborative meetings, by 2020 the approval rating of President Tsai Ing-wen’s administration was over 70%.
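What follows is a minimal, hypothetical sketch of the kind of bridging-score ranking described here, written in Python with made-up votes, clusters, and function names; it is not the actual vTaiwan or Pol.is implementation. The idea it illustrates: score each statement by its lowest agreement rate across opinion clusters, so that only ideas every cluster can live with rise to the top, while one-sided ideas sink.

```python
from statistics import mean

# Hypothetical votes: +1 = agree, -1 = disagree, 0 = pass/abstain.
votes = {
    "uber_a": {"undercut_meters_bad": 1, "surge_pricing_ok": 1, "ban_ride_sharing": -1},
    "uber_b": {"undercut_meters_bad": 1, "surge_pricing_ok": 1, "ban_ride_sharing": -1},
    "taxi_a": {"undercut_meters_bad": 1, "surge_pricing_ok": 1, "ban_ride_sharing": 1},
    "taxi_b": {"undercut_meters_bad": 1, "surge_pricing_ok": 0, "ban_ride_sharing": 1},
}

# Opinion clusters; in a real deliberation these would come from clustering the votes.
clusters = {"uber": ["uber_a", "uber_b"], "taxi": ["taxi_a", "taxi_b"]}

def agreement(members, statement):
    """Share of a cluster's members who agree with a statement."""
    return mean(1 if votes[m].get(statement, 0) == 1 else 0 for m in members)

def bridging_score(statement):
    """A statement only 'bridges' if every cluster largely agrees with it."""
    return min(agreement(members, statement) for members in clusters.values())

statements = {s for per_person in votes.values() for s in per_person}
for s in sorted(statements, key=bridging_score, reverse=True):
    print(f"{s:>22}  bridging score = {bridging_score(s):.2f}")
```

Run on these toy votes, “undercutting meters is bad” scores 1.0 (both groups agree), “surge pricing is fine” scores 0.5, and “ban ride-sharing” scores 0.0, so the divisive statement is never the one that gets amplified.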
-
So, a massive difference.
-
And I would also say it is because we involved even people younger than 18 in the process. When I was 33, in 2014, I was a reverse mentor—a young adviser to a cabinet minister, designing vTaiwan and such—and we institutionalized this: each cabinet minister must have people younger than 35 to advise them. So two years after working on vTaiwan, I turned 35—too old to be a reverse mentor—and they elevated me to minister without portfolio. And then I started working with reverse mentors who were sometimes 17, not even 18. And they crowdsource their ideas: with 5,000 people joining an online petition, they can force a deliberation from the ministers. For example, high schoolers petitioned to have school start one hour later, because one hour of sleep gets you better grades than one hour of study. And they did get it. Several leaders who started such petitions became reverse mentors to the relevant agencies.
-
And you went on to become Taiwan’s first Digital Minister, didn’t you?
-
Yes. So, I was digital minister without portfolio from 2016 and we founded the Ministry of Digital Affairs, or moda, in 2022.
-
And then the pandemic hit, of course. Do you think that Taiwan was well-placed to handle things like the misinformation and the fake news that exploded at the time because there was so much fear and uncertainty?
-
Certainly. We understood that the only way to counter these polarization attacks (of which, by the way, Taiwan has been the top target for the past twelve years straight, according to the V-Dem Institute) is through journalism. And I don’t just mean institutional journalism. Of course, that’s very important; my parents are both journalists. However, I also mean civic journalism.
-
In 2019, we changed the curriculum of our basic education. All our primary schoolers and high schoolers learn not just media literacy—receiving information—but media competency: producing information. They learn the whole journalistic process: fact-checking, balancing perspectives, bridging divides, as well as measuring air quality, water quality, and noise levels, and publishing the results. So they share the same common knowledge, knowing that everybody else knows this as well. And so entering the pandemic, we repurposed this civic infrastructure instantly, so people could visualize, for example, where the next available masks were, and people together depolarized the conversation around masks and anti-masks, vaccines and anti-vaccines, contact tracing and privacy, and so on. So, uniquely, we lost only seven people in the first year of the pandemic, we never locked down any city during those three years, and TSMC and all the factories kept running.
-
Fast forward to 2024. You were still Digital Minister at this point, but there was an election in Taiwan, and there were fears about sabotage by Chinese-sponsored fake news particularly. How did you preempt this, building on what you’d learned during that time?
-
Well, I was moda minister until May. Not just during the election, but also right after it, there was a spike in deepfake scams on social media. However, we pre-bunked these attacks; this is not debunking after the fact. Rather, already two years earlier, I had deepfaked myself.
-
Did you? Were you nervous about doing that?
-
Not at all. I had an actor play me, and they basically spoke as me. And then we showed exactly how this is done. And I said, you know, this currently takes two hours on a laptop to produce, but soon it will take only two minutes, then two seconds, then twenty milliseconds. And when that day arrives, you cannot really trust anything on content alone. You have to check digital signatures. You have to check the behavior, not just the content. So people already had an inoculation against deepfakes going into the 2024 election.
-
So arguably, the attempts—and there were plenty of attempts, accusing us of election rigging and so on—all backfired. Our president now, William Lai, got elected with numbers better than the polls predicted, at over 40%. And we also learned that, regarding these attacks from foreign sources and deepfake scams, the red lines can be drawn by asking the people what to do together. We sent a text message to 200,000 random numbers around Taiwan, inviting a microcosm of Taiwan—around 450 people—and together they talked about mandatory digital-signature liability for investment scams, and about throttling connections to TikTok if the platform does not agree to our liability rules, which we put into effect last year. So throughout this year, there have simply been no more deepfake scams on social media in Taiwan.
-
So it was a success. And what was the role of AI during that 2024 election? How did you use it as a tool?
-
Yes. As I mentioned, this conversation to draw the red lines around fakes online was facilitated by what I call “assistive intelligence.” So it’s also AI, but it’s civic AI, communal AI. The AI is not a huge, see-everything, do-everything, authoritarian ruler that replaces human relationships and judgment; rather, it’s more like a facilitator, like a glorified chess clock. In a room of 10 people, they can see not just the transcript and how much they agree, but also what other people’s ideas are and how their ideas can combine. Producing a sense-making report from 45 such rooms usually takes days, but thanks to assistive intelligence we produced it on the very same day we held this alignment assembly. And so by the end of the day, those 400 or so people voted on it and showed everyone that more than 85% of the people engaged in the process agreed with the final bundle of policy, creating a legitimate consensus.
-
Do you think the examples that you’re sharing of the way in which you’re using tech in Taiwan, in governance, in democracy—you have a population of 23 million—do you think that could be shared across other countries, perhaps larger countries? How about the US, for example?
-
Yeah, definitely. We have just concluded a very successful experiment working with Google’s Jigsaw unit as well as the Napolitan Institute in the US, called “We The People 250,” in which two conversations—one on freedom and one on equality—were held by five people from each congressional district in the US. So together, these thousands of people were a statistical representation of the entire US population.
-
It turns out that, using assistive intelligence, people shared their personal lived experience of what freedom means and what equality means, and we were able to build a “social translation” between, for example, people who care about climate justice and people who care about biblical creation care. And they both—although they often talk past each other on social media—were then able to see through social translation that they actually share the same concrete actions and can understand each other using these bridging translation layers. So I’m very happy to report that Taiwan is no longer the largest polity to try these methods. And in Japan, the same broad-listening ideas have propelled Takahiro Anno, a 34-year-old, to a House of Councillors seat. And his party, Team Mirai, using this broad-listening approach, is gaining support.
-
Can I ask what you’re doing now? You left your post in the government last year. What are you doing now?
-
So, last year, I switched from the cabinet position to a diplomatic position and became Taiwan’s cyber ambassador. “Cyber” comes from a Greek word that means “steering.” So I travel around the world; I’ve been to 27 countries in the past year, changing time zones every four or five days, sharing the Taiwan Model. Many democracies now feel, as we did ten years ago, that antisocial social media is polarizing the citizenry and giving rise to cynicism—especially among young people—as well as populism, extremism, and so on. And so people are very keen to hear about the Taiwan Model, where we rebuilt intergenerational unity, making sure that young people’s anger and outrage can be turned into co-creative energy instead of something more volcanic.
-
There’s a lot of mistrust of big tech, isn’t there? In other countries, particularly in Europe, there’s distrust of the companies and of how people are being caught up in their enormous platforms. Do you think that’s a difficult culture change? Because that’s very different, isn’t it, to the way you’ve approached things in Taiwan?
-
Well, I was born in the eighties. Taiwan helped launch the personal computing revolution. Before that, people didn’t really type into a computer. They typed into a terminal that connected to a mainframe—a large computer owned by Big Tech or the Big State—and the administrator knew every single key you pressed into the terminal. And if the administrator did not want to upgrade software or install new software, you couldn’t do it.
-
However, with personal computing, people understood that even though it was a floppy disk and there were limitations on what you could do, they could make their own spreadsheets, their own desktop publishing software. And before long, people were moving the conversation from mainframes to interconnected personal computers on the World Wide Web. And because of that, we now have an Internet culture where people can innovate without permission from Big Tech. So we’re seeing more or less the same movement this year, away from the centralized Big Tech AI models toward a more horizontal way of communal models that can run on a laptop, or even on a phone, and are very energy-efficient. And also, you don’t need to be monitored by Big Tech for whatever you type into your chatbot.
-
What’s your view about governments and regulating tech? It seems that lots of countries tie themselves in knots trying to do it. They can’t move fast enough. They’re intensely lobbied by the big tech companies. Do you think it’s working?
-
Well, when we set up the moda, uniquely, we placed ourselves in the Parliament’s Transportation Committee. In other countries, it’s in Science or Economy, or if you talk about cybersecurity, in the Interior. But we put ourselves in Transportation because we believe in interoperability, which means that all the highways need to have on-ramps and off-ramps.
-
It’s a very simple idea that the MPs on the Transportation Committee understand: if you only have a highway but no off-ramps, it’s not really going anywhere, right? It’s like a hamster on a wheel; there’s no way of steering it. So through interoperability we understand, for example, that if a new telecom company offers better service for a region and you switch from a large telecom to the small one, you are able to take your number with you. If there’s no number portability, there is no incentive for the bigger telecom to innovate, because they can just capture their users. And the same goes for social media companies as well.
-
But we’re seeing, for example, that the state of Utah in the U.S. has passed the Digital Choice Act, so people can move from, say, Facebook to Bluesky or to Mastodon and keep their community’s wisdom: keep their interactions, their followers, their replies, their reactions. And so this offers an alternative, and then there’s a motivation for Big Tech to innovate, to care more about how people use their services. So, to your question, I believe regulators should not just say, “Oh, we want a national champion, we want our own Facebook,” but instead regulate the existing Big Tech like utilities, making sure people can move freely across them, just as we can port our numbers across telecoms.
-
Are the tech companies—and I suppose more importantly governments—doing enough to protect democracy, do you think?
-
Yes. I think democracy needs to be revitalized. In Taiwan, we see democracy like a semiconductor: a social technology that you can upgrade every few months. Because the current form of democracy really only allows people to express their opinion at very low bandwidth. If you vote for one person out of, say, eight candidates, that is just three bits of information. And you have to wait four years before you can cast your next vote. But the world is changing so quickly that it is impossible for this very low-bandwidth signal to capture what people truly want—for example, when deepfakes hit, or the infodemic, or the pandemic, and so on.
-
So we need to continuously improve the bandwidth of democracy, so that when people see, for example, the deepfake scams, we can very easily convene such assemblies. Think of it like a poll, but instead of an individualized poll, where people tend to be more extreme, it is a poll of a group—like a group selfie—that generates new ideas that people can live with. So if we upgrade democracy this way, then whatever AI innovations appear, we can also use them to improve governance. It’s called scalable governance. And listening at scale.
-
So you think that should replace the four-yearly election?
-
I think it should augment MPs. There’s an idea called the “double diamond” from IDEO. The idea is that we can use these new processes to discover new issues and define people’s common values, but the development of policy and the delivery of policy stay with the Parliament and the executive branch, respectively. And so the first diamond, the more exploratory one, can of course admit much higher bandwidth; but for consistency, I think the second diamond at the moment still belongs to the existing political institutions.
-
Are we taking the threat of AI seriously enough?
-
Well, I think for many people, the threat of AI is already here now; it is not sometime in the future. For example, according to the CIP.org global dialogue, one in seven people, I believe, report that somebody close to them has encountered reality-distorting episodes through synthetic intimacy. So that is a big problem for many people already today, not just young people, but also people who have vulnerabilities, who suffer from trauma, and so on. And we also know that they trust their chatbots much more than the companies that make the chatbots, which creates, I think, a 30% difference. So I think a way out of this dilemma is, instead of saying “ban chatbots,” to make sure that the chatbots are aligned to the community’s needs, not to Big Tech’s needs. If you don’t trust Big Tech to make a mainframe computer, the obvious solution is not banning the use of computers, but rather switching to personal computing.
-
And is that what you mean when you say that AI alignment is fundamentally flawed?
-
Yes. What I meant is that if you rely on a few Big Tech people to anticipate local cultural and communal needs, using very thin, context-free rules that the models must follow to fit all those different cultural expectations, then of course it does not work. It is exactly the same issue as expecting that simply voting among eight candidates every four years can capture the response to all the emerging issues. So it is fundamentally flawed. I think it would be much better if the designers of AI systems stayed humble and, instead of making tech progress at the expense of local communities, empowered the local community in what I call “techno-communitarianism,” so that each community can steer its own AI model toward its own local norms.
-
So it should be more accountable to citizens rather than corporations?
-
That is exactly right. And I also think the corporations are now more or less moving in this direction. In the UK, you talk about “muscular adoption,” right? So it’s not about training your own champion superintelligence, but rather making sure that you can shape the norms, adopting AI models in all the different sectors so that they answer to the terms of service of those sectors, instead of terms of service set by the big companies.
-
Where do you stand on the privacy discussion? In terms of AI, we are told that it needs to have a lot of good data in order to be any good, but some people, certainly in Europe and the UK, are reluctant to share their personal data. Is there a point at which we have to make a bigger decision about how much we choose to share with tech?
-
Well, during the pandemic, we fundamentally refused this false dilemma, this false trade-off between the public good (like contact tracing) on one side and privacy on the other. During the pandemic in Taiwan, if you entered a venue, you saw a QR code. If you scanned it, it texted a 15-digit random number generated by the venue to 1922, the well-known telecom short code for pandemic response. And the thing is, because this is a random number, your telecom didn’t actually know where you had been. And when you show that you have texted 1922, the venue learns nothing about your phone number or anything else about you. It’s called zero-knowledge.
-
If there is community spread, we are able to notify the people who were in the same place, but again, the state learns nothing. If there is no community spread, then everything is deleted after a couple of weeks. So my point is that if you design the data-sharing protocols right, you can have the public benefit of notifying people when they have been exposed, without letting anybody learn private details—the whereabouts—that they did not already know. So it is a matter of privacy-first design; it is not a matter of a trade-off.
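As an illustration of the privacy-first design described above, here is a minimal sketch with hypothetical class and function names; it is not the system Taiwan actually deployed. The property it shows: the venue only ever holds a random code, the carrier only ever sees a subscriber number paired with that code, and nobody holds both a person’s identity and the place they visited.

```python
import secrets
from datetime import datetime, timedelta

class Venue:
    """A venue shows a QR code wrapping a 15-digit random code; it never sees visitors' numbers."""
    def __init__(self, name):
        self.name = name
        self.code = "".join(secrets.choice("0123456789") for _ in range(15))

class Carrier1922:
    """The carriers' 1922 short-code endpoint: it sees numbers and codes, never venue identities."""
    def __init__(self):
        self.log = []  # (subscriber_number, venue_code, timestamp)

    def receive_sms(self, subscriber_number, venue_code):
        self.log.append((subscriber_number, venue_code, datetime.now()))

    def numbers_to_notify(self, venue_code, window_days=14):
        """Numbers that texted a given code recently, queried only if that venue reports a case."""
        cutoff = datetime.now() - timedelta(days=window_days)
        return {num for num, code, ts in self.log if code == venue_code and ts >= cutoff}

    def purge(self, retention_days=28):
        """Delete everything once the retention window has passed."""
        cutoff = datetime.now() - timedelta(days=retention_days)
        self.log = [row for row in self.log if row[2] >= cutoff]

# A visitor scans the QR code and their phone texts the venue's code to 1922.
carrier = Carrier1922()
cafe = Venue("some cafe")
carrier.receive_sms("0912-000-111", cafe.code)

# If that venue later reports community spread, health workers ask the carriers
# to notify whoever texted the matching code; the venue never learns who came,
# and the carrier never learns what place the code points to.
print(carrier.numbers_to_notify(cafe.code))
```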
-
And yet, I remember covering a story during the pandemic of someone who said that their phone battery had died while they were asleep, and within an hour, the police were knocking on the door. It was obviously very effective, but I just can’t imagine that happening in the UK.
-
I assume you are describing people placed in quarantine; their phones were essentially a proxy for ensuring they stayed in the same place during the quarantine. I was describing the pre-quarantine contact-tracing method. And I think each culture needs to set its own norms about the appropriate use of such technologies. During the pandemic, even before we had the contact-tracing system, we had already organized online conversations using the same bridging technology, Pol.is, to ask people what they felt comfortable with. On this particular point, I think before the next pandemic we want to have such national conversations, so that people know exactly what to expect because they participated in the making of the system.
-
We see time and time again the way in which technology and social media can bring out the worst in people and can be damaging and toxic to society. And we also see a growing call from some places to ban them or delay their introduction. Australia is introducing a ban on social media for under-16s next month. Is the answer to ban it or delay it?
-
Well, I think it is equally important to provide safer alternatives. When the ozone layer was being depleted by Freon, we didn’t just say “let’s ban refrigeration,” which would have been quite devastating. Instead, through the Montreal Protocol, people said let’s double down on safe alternatives, and we committed ourselves to switching to those alternatives after a certain number of years.
-
And so I think the same needs to happen here. Of course, if you ban antisocial media and promote prosocial media, that’s very good. In Taiwan, most classrooms now ban the use of small touchscreens, but there is a “one laptop per child” policy, so each student has a large tablet or laptop. They see screens as something to share with other people, to build relational health and civic muscle, instead of isolating themselves. So I’m not saying that banning is the complete solution. It may be one part of the solution, but it always works best if you can provide a healthier alternative in the same place.
-
I want to talk to you a bit about China and the relationship between China and Taiwan. Does Taiwan have a role, do you think, in promoting democracy in China?
-
Well, we have always worked with people around the world, including people from the other side of the Taiwan Strait, to show them exactly how to deepen democracy. And in Taiwan, even during the martial law era, even before we had democracy, there were already strong movements in cooperatives, in consumers’ unions, in local spiritual traditions, and so on. All of these, I think, are very useful even for people who currently do not enjoy freedom of expression, freedom of religion, or freedom of political participation. They can also organize using more or less the same techniques I have just shared, broad listening, assistive intelligence, and so on, even if only in a local, communal fashion. And the fact-checking network that Taiwan built very successfully to counter the infodemic can also be adopted by people in authoritarian regions, maybe at first only checking food-safety issues or disturbing rumors about environmental pollution. And once they build that civic muscle, it paves the way for more political possibilities downstream, just as it did in Taiwan during the eighties.
-
Taiwan is bombarded by cyberattacks from China every single day. How are you managing that onslaught?
-
So we enjoy “free penetration testing”: some 2 million attempts every day. Usually you have to pay for this; we get it for free. And we manage it by making sure that there is resilience, not just defense. Defense is fencing out: we don’t let those attacks through. Resilience means we know some of them will get through, but because in every part of the stack we have a plurality of options, even if one particular component is compromised—we assume it is probably breached at some point—the attacker cannot move laterally to other parts of the stack, and we can swap in alternatives for that component. For example, for cellular communication we have multiple choices; for the hyperscaler, so-called cloud solutions, we have more than three choices; and so on. And by working with a resilience mindset, each attack becomes intelligence that we can share with democracies around the world, so they can guard against the next attack. Because no democracy is an island, not even Taiwan. Taiwan may be the front line for such attempts, but all democracies face the same issue.
-
Who do you think is gonna win the AI race? China or the US?
-
Humanity, for sure. If humanity does not win the AI race, there may not be a human race anymore. So we’re all in this together.
-
That’s a very diplomatic answer. I wanted to ask you briefly about the chip war. Computer chips were developed in such a global way, but now it feels a bit like that’s being ripped apart. What’s the future for big Taiwanese companies like TSMC?
-
Well, as we know, TSMC is now expanding around the world, bringing with it, as I mentioned, cybersecurity and trusted networks to all parts of the supply chain. And I think this is good, because it is a sign of Taiwanese interdependence. Chips have become indispensable, and the cybersecurity and counter-disinformation practices that we have built—the Taiwan Model—are now shaping the norms for all of TSMC’s suppliers, which, as we mentioned, span many democratic countries. And I also think that because Taiwan stands for trustworthiness, people come to see Taiwan as indispensable not just for economic reasons, but because Taiwan’s chips have critical academic, scientific, military, and industrial uses. There are many more conversations that we can have with tech providers around the world. For people who want to do so-called sovereign technologies, who want to build a technological stack that is not dependent on, or colonized by, Big Tech anywhere else in the world, Taiwan is your trusted friend, because we do not harbor any intention of colonizing anyone.
-
Do you fear the redistribution of power that is going on at the moment, particularly in terms of big tech? We have a handful of enormous companies generally run by enormously powerful men. Does that concern you?
-
Well, as I mentioned, if you have off-ramps and on-ramps across all parts of the stack, then no matter whether you call it the Euro stack, the Taiwan stack, the India stack, and so on, at the end of the day one does not need to worry that much, because people always have freedom of movement among choices. But if, because of lobbying or other issues, the platforms do not offer meaningful freedom of exit, then yes, one can get trapped.
-
A couple of years ago, there was a study that asked undergraduate students in the US who use TikTok how much they would have to be paid to give up TikTok, and the answer was about $60 a month. However, if there were a magic button they could press to quit together with everybody they know on TikTok, they would be willing to pay $30 a month. So, obviously, there’s a product-market trap: everybody is suffering, but nobody wants to move out because of FOMO—fear of missing out. But if governments have this freedom of movement baked in, then there is no need for each individual to suffer that $60 or $30 a month; rather, people can, one by one, move to safer alternatives while keeping their communities.
-
How likely is that to happen, though?
-
Well, as I mentioned, the state of Utah is already passing such acts. For instant messaging and group messaging, Europe already has the Digital Markets Act, and they are now looking to extend that to social media. And people around the world are already asking AI companies to offer the same, something we call “context portability.” So if I switch from one AI service to another, anything that AI has learned about me should be transferred to the next service, which may be open source, which may be run by the community. Again, we need to plan for this kind of portability even before AI systems become indispensable in our daily life, just like social media: had we had this portability for social media ten years ago, a lot of the worst antisocial practices simply would not have been possible. And so, yes, I think context portability is something we actively talk about in Taiwan, in our upcoming AI Basic Act and Data Innovation Act, and I’m sure European counterparts are seeing this as well.
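As a rough illustration of what “context portability” might look like in practice, here is a minimal sketch; the schema label, field names, and functions are hypothetical, since no standard export format exists yet.

```python
import json
from datetime import date

def export_context(user_id, preferences, memories):
    """Bundle what the current assistant knows about a user into a portable record."""
    return json.dumps({
        "schema": "portable-context/0.1",  # hypothetical schema label
        "exported_on": date.today().isoformat(),
        "user": user_id,
        "preferences": preferences,  # e.g. language, tone, topics to avoid
        "memories": memories,        # facts the user has chosen to keep
    }, ensure_ascii=False, indent=2)

def import_context(blob):
    """A receiving service (possibly open source, community-run) loads the same record."""
    record = json.loads(blob)
    assert record["schema"].startswith("portable-context/")
    return record["preferences"], record["memories"]

# Export from one service, import into another, and the context carries over.
blob = export_context(
    "user-123",
    {"language": "zh-TW", "tone": "concise"},
    ["prefers grayscale screens", "asks for sources"],
)
preferences, memories = import_context(blob)
print(preferences, memories)
```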
-
But that has been a deliberate thing that the tech companies didn’t want. They don’t want to make it easy for you to leave, do they?
-
I really want to push back a little bit, because if you are the second- or third-largest vendor, then portability is good for you. It is only bad for the top vendor, which often has, like, 60% or 80% market share. So when we introduce these kinds of mandatory bridging protocols, the trailing vendors usually really endorse them. And if you count all of them together, they have market power equivalent to, or even greater than, the top player.
-
Do you think that we’re going to see a lot of change in the way we use technology and the way technology runs our lives within the next five years?
-
We are already going through a period in which many people feel that they do not meaningfully steer the technologies. I think one in ten people report that when they have an extended conversation with an AI chatbot, it is the chatbot controlling the direction of the conversation. They feel out of control, which is very interesting considering that it has to wait for you to type the next sentence, but already people are feeling pulled by synthetic intimacy.
-
So, yes, in the next five years we’re going to see more and more of that—as I mentioned, like a hamster on a wheel. And I think one very effective way to detect this is to look at human-to-human relationships and whether they suffer. If the AI systems we use empower communities, then we should see less isolation and more people-to-people interaction. But if it is synthetic intimacy replacing human connection, then we’re going to see more and more isolation.
-
That is a fear, isn’t it? Because I’ve heard it said that we’ve never been as connected to each other before as we are now. But equally, there’s never been so much loneliness.
-
Indeed. I turned my laptop screen, as well as my phone, to grayscale, because this way, when I look at you across the table, I see you as more vivid than the screen. But if I don’t turn on the grayscale color filter, then my screen is infinitely more vivid than the reality around me, and it’s difficult for me to resist the addictive intelligence living in my phone. So I think whether it’s grayscale, screen time, or larger shared screens, we need to collectively set new norms about the ethical, humane use of technologies, instead of trapping each other on this hamster wheel.
-
Is that something that might happen generationally over time? You and I are a similar age, and so we’ve both grown up with this exploding around us. But future generations will grow up within it. It will just be a normal part of their lives from the get-go.
-
Definitely, and we also see that the younger generations have a better sense of how much surveillance is going on. They actually insist more on end-to-end encryption, on privacy-preserving platforms, and so on. So I think the idea behind reverse mentorship is that the people closest to the pain—that is to say, the younger people, the digital natives—come up with solutions much more readily than we digital migrants do. Many of the most innovative countermeasures in Taiwan were designed by people younger than 35, sometimes younger than 18. So we need to bring them into the seats of power much earlier. This is called the Pygmalion effect: if you expect them to set the agenda for the country, they mature very quickly. But if you exclude them from political power, they become cynical, maybe polarized, maybe extremist.
-
But is there a risk that we’re going to be leaving them out if we don’t let them access the digital world until they’re older?
-
Which is why, as I mentioned, it needs to be a conversation with the teenagers. So instead of saying that people under 18 or 16 simply cannot access this kind of software, we need to work with them, maybe set up systems and assemblies, maybe listen to them at scale, and ask: what are the healthy alternatives we should promote, not just in schools but also in families, that they prefer and their parents accept? And then make those community-oriented technologies the default, and indeed widely, maybe universally, available. We can subsidize—as we did in Taiwan using the Universal Service Fund—the most rural places, the highest mountains, to have not just broadband access but also the digital competency to create and shape the norms together, instead of just banning the bad stuff while not promoting the good stuff.
-
TikTok is banned in Taiwan. Have you put that to the vote of young people? And if you did, would they want it?
-
Well, TikTok is only banned in the public sector and in places run by the public sector. So if you are an adult using your own phone connection, TikTok is actually not banned. However, young people vote with their choices. We are a very rare country in which, over the past couple of years, usage of TikTok did not grow. Rather, young people by and large prefer, as I mentioned, the Fediverse—the Federated Universe—such as Threads.net, run by Meta. If you post from Threads into the Fediverse, then even though I don’t use Facebook or Threads, I can subscribe to you on some other platform and still interact with you normally. So that’s the off-ramp and on-ramp. And even though we’re just 24 million people, Taiwan has more daily active users on Threads than any other country, which is quite amazing. So by shaping the norm, people can voluntarily move to a platform they understand and expect, without the country banning anything for everyone.
-
You do a lot of research in Taiwan. A lot of R&D comes from you and traditionally always has. Is there pressure to share a lot of it with China?
-
If you do research, of course you publish. And once you publish, it’s in the commons, in the public domain. It does not belong to any specific region.
-
Do you think every country should have a tech diplomat?
-
I think many people across the world already see that younger people are effective digital diplomats online. They share their culture, they incorporate foreign ideas into their own culture, and they generally practice public diplomacy. So while of course it helps to have the foreign service explicitly name and recognize people doing such public diplomacy, such as yours truly, I think many young people are doing this very effectively already.
-
So you already have a tribe.
-
Yeah, indeed, because I am Taiwan’s cyber ambassador and not really posted in any particular country. And when I attended, for example, MozFest in Barcelona, people there already saw themselves, ourselves, as a cross-national tribe. While taking care of our local communities, we also connect to the open-movement tribe worldwide. But this tribe is interesting in that you do not need any accreditation to join. All you need is an email account, and then you can be part of this open movement.
-
Is there anything else that you’d like to talk about that we haven’t mentioned?
-
Well, I can read you some poetry. I wrote a job description in 2016 when I first became Taiwan’s Digital Minister. “Digital” in Taiwan also means “plural.” So it was also the Minister for Plurality. And so it’s a very short poem. I can recite it for you if that’s okay.
-
Yes, please do.
-
Just like this: “When we see the Internet of Things, let’s make it an Internet of Beings. When we see virtual reality, let’s make it a shared reality. When we see machine learning, let’s make it collaborative learning. When we see user experience, let’s make it about human experience. And whenever we hear that the singularity is near, let us always remember: the plurality is here.”
-
I love that. Did you write that?
-
Yes. I’m a “poetician,” as it were.
-
A poet-tician. I’m very jealous of your numerous job titles and hats. When did you write it?
-
In 2016. I was in New Zealand. The news came while I was listening to some Māori chanting, and, as you probably know, the Taiwanese Austronesian peoples—on the east coast of Taiwan—share ancestors with the Māori, all the way through Polynesia. So I felt a kind of calling. And when the news came that I was going to be Digital Minister, I just wrote that job description.
-
And it’s stood the test of time. It’s still topical today, isn’t it?
-
It really is. Today, I think we would say that we, the people, are already the superintelligence we’ve been waiting for. We don’t need another machine superintelligence to rule over life. We can just have the power of life, not the power over life.
-
So do you think this current obsession with AGI and ASI — Artificial General Intelligence and Artificial Superintelligence — is misplaced? We don’t need it? We’ve already got it?
-
Yeah. I think it’s a lack of imagination, to be very honest.
-
Why?
-
Well, because if you do not have good community spirit, civic care, and civic muscle, then it may be interesting to think of a machine in the sky that sees everything, solves everything, and so on. However, for most people who practice art, creativity, journalism, and so on, we understand that the meaning is in the relationships that we build through our work; it is not just in the output of those works. To confuse the output of the work with the relational care is to make a category error between utilities and the ethics of care. And so if we have better moral imagination, I think it would be quite obvious that we want assistive intelligence to strengthen human-to-human relationships, and we would not sacrifice those relationships and fall prey to synthetic intimacy, to isolation and polarization, just because we want to maximize some number, like GDP or the attention spent on touchscreens.
-
What’s your view on humanoid robots? Are we all gonna have them?
-
Well, I mean, humanoid robots are easier to introduce in domestic settings, because most domestic settings are designed for something that looks like a human. On the other hand, I’ve seen some Lego-shaped robots that are clearly assistive intelligence; they’re not trying to trick us by developing synthetic intimacy. I think that’s a lot healthier than something that sits right in the middle of the “uncanny valley.” And so I think the capabilities are going to be there soon, but a social contract around what to introduce into our daily relationships really needs to be a societal conversation. I’m happy that in the UK, the AI Security Institute has a societal resilience unit designed to answer such questions. And I’m happy to help as well.
-
Because robots don’t really need heads, do they?
-
Well, if they’re going to operate dishwashers, then they do.
-
You’re doing some work with California?
-
Yes, with Gavin Newsom, the Governor, and his First Partner, Jennifer. For the past couple of years, we have built the “Engaged California” platform, and it has been successfully used to crowdsource ideas around wildfire prevention and mitigation following the Los Angeles fires, with survivors who were interested in policy participating. And it is now being used to modernize the workforce in the California state government, by asking the people in the state government what to do with AI, and how to introduce AI in a way that they feel comfortable with. So it’s like government efficiency—like D.O.G.E.—but not top-down.
-
Audrey, you have such a busy life. You’re getting a lot of air miles. You’re in different countries every week. How do you relax?
-
Well, I use an app called Timeshifter, so I don’t have jet lag at all. I have a “jet boost.” Every time I enter a new time zone, because I have entered my flight number into the app, it advises me when to drink coffee, when to wear sunglasses, and so on. So by the time I land, I get an extra boost of energy. That is how I relax: by going on long-distance flights.
-
And that works, does it?
-
It really works.
-
That’s amazing. I’m gonna download that. Thank you ever so much for your time.
-
Amazing. Thank you. I really enjoyed this conversation. Live long and prosper.