
  • Audrey, I just want to restate the intention we had and why we were so excited to interview you here. In my mind, we’re recording a sort of one-hour blueprint for how you upgrade from your Windows 95 democracy, which has all these new vulnerabilities because AI was released; AI is like the NSA hacker tool that broke democracy. What’s the upgrade plan that lets all these democracies go from Windows 95 to the latest thing?

  • So I’ll say a more polished version of that on air. But when I think about what we’re trying to accomplish together, it’s less that we’ll interview you and more that we’re on the same side of the table, brainstorming a blueprint for this upgrade plan. And you’ve got the best living example of this upgrade plan.

  • Welcome to Your Undivided Attention. Aza and I are so excited to have with us today the digital minister of Taiwan, Audrey Tang. The reason we wanted to have Audrey on comes from asking: what will it take for AI to go well with humanity?

  • We think about, well, what are our democracies like in the world today? They’re kind of like an old software platform. Imagine your computer is running Windows 95. Windows 95 was great for a long time, and then someone releases a new cyber hacking tool, and suddenly every Windows 95 computer in the world is vulnerable.

  • Well, what that vulnerability is for Windows, AI is for democracies. Democracies are suddenly super vulnerable because AI can generate misinformation, find loopholes in law, and generate new cyber exploits; it can do all these things that leave the democracies we’re all living in exposed.

  • And the reason we wanted to have Audrey on is that she’s the best living example, I think, of what it would take to upgrade from our 18th century democracies to some kind of 21st century democracy that is resilient to AI and no longer vulnerable. So, Audrey, we’re so excited to have you. Welcome to Your Undivided Attention.

  • Good local time, everyone, and happy to be back.

  • I think we’re going to say the same thing, Tristan, but maybe we can start by walking through a sort of table of contents, a map of the territory: why, and where, are open societies more vulnerable to generative AI than closed societies?

  • And then from there, we can sort of choose to zoom in on different areas. Let’s get the lay of the land and that’ll be the table of contents for today’s conversation.

  • Okay. Do you want me to do it?

  • Yeah, sure. That’d be great.

  • Tristan kind of just did that, right?

  • Oh, yeah, sure. Maybe start with how you see the threat landscape: how are democracies vulnerable to the threats posed by AI?

  • In Taiwan, we just saw a very successful result in the January 13 election, the first in a year in which around 70% of the democratic world will hold national elections. For the U.S., it’s November.

  • And we have seen that the authoritarians who want to hijack, hack, or attack our democracy have many ways to do so. They range from the cybersecurity exploits and denial-of-service attacks that Tristan just alluded to, all the way to deepfakes, to sowing discord through fake videos of alleged election fraud, to polarization: analyzing which topics and keywords will drive people apart from each other, and just capitalizing on that.

  • Our liberal democracies are designed to allow a free flow of information, a marketplace of ideas, and so on, and that openness can be exploited in many, many ways. Without an upgrade toward collaborative diversity, diversity alone does not counter this kind of hyper-focused persuasion attack.

  • Maybe just quickly, before we get into the rest, can we define the term you used that I don’t think people know? Really quickly, what’s a denial of service attack?

  • A denial of service attack is like calling a telephone line over and over to keep it busy, except it’s aimed at websites and other public services.

  • So with AI, it is now possible to simulate human-like behavior, so the existing CAPTCHAs, the systems that tell robots from humans, are no longer robust enough to tell whether those millions of connections come from humans or robots.

  • And when a system that’s not designed to be resilient against denial of service attacks faces millions of these human-like requests, it quickly grinds to a halt.
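
To make the mechanics concrete, here is a deliberately minimal sketch (all numbers invented) of why a flood of human-looking requests grinds a service to a halt: the attacker’s queue only has to grow faster than the service can drain it.

```python
def simulate_flood(capacity_per_sec: int, human_rps: int, bot_rps: int,
                   seconds: int) -> int:
    """Toy model of a denial-of-service attack: a service that can answer
    `capacity_per_sec` requests per second receives organic human traffic
    plus a flood of indistinguishable bot requests. Returns the unanswered
    backlog at the end of the window."""
    backlog = 0
    for _ in range(seconds):
        backlog += human_rps + bot_rps             # arrivals this second
        backlog -= min(backlog, capacity_per_sec)  # what the service clears
    return backlog

# Sized for normal load, the service keeps up with 1,000 human requests/sec...
assert simulate_flood(1200, 1000, 0, seconds=60) == 0
# ...but a bot flood leaves a backlog that grows every second it continues.
print(simulate_flood(1200, 1000, 5000, seconds=60))  # 288000 unanswered requests
```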

  • So then, a lot of people listening to this, I think can hear this as like, oh my god, I suddenly feel like my society is way more vulnerable than I thought it was.

  • Because AI can generate automated lobbying and automated robocalls. In New Hampshire, there was a deepfaked robocall of President Biden telling voters not to vote; that just happened in the news recently.

  • And so it suddenly feels like every lock, every sort of security that held our society together, is coming off. And that’s the place where I think we left our listeners when we did The AI Dilemma talk.

  • But you, Audrey, have built an upgrade plan: spend this many billions a year and you do all these upgrades across your information security, your cybersecurity, your trust security, your polarization security.

  • What I would love for you to do is walk through some of the things that you’ve done, maybe through the vehicle of how you secured your last election, which just happened. And congratulations, your party won.

  • All three parties won. Everybody won.

  • Well, technically all three parties won, but you got the primary seat there. So, yeah, do you want to walk us through how did you… Yeah, go ahead.

  • Just before you do, I wanted to say one more thing about denial of service attacks. Another way to put it that generalizes well is that it’s a way of using generative AI to cause institutional overwhelm, and institutional overwhelm is the concept I think we have to protect against.

  • I think it will give listeners a framework for understanding how we overcome institutional overwhelm. Think of something like Pentera, which runs automated attacks to find security vulnerabilities; now imagine quickly running an AI to find loopholes in law. I just want to build that out for a second, to give a sense of a surprising way we get institutional overwhelm: you can imagine law as a kind of code, like the computer code that lawyers speak.

  • And so, just like you can use AI to discover vulnerabilities in computer code, you can use AI to discover vulnerabilities in law: find me all the loopholes that nobody has discovered before, find me paths of argument that let me appeal indefinitely (justice delayed is justice denied, that kind of thing). That’s institutional overwhelm.

  • This is great. Okay, so do I go back to your question of how the Taiwan experience showed that this could be upgraded?

  • So in 2022, I remember filming a deepfake video of myself with our Board of Science and Technology. This is called “pre-bunking”: doing it before the deepfake capability falls into the hands of authoritarians, which would not happen until, say, last year.

  • Already two years ago, we pre-bunked this deepfake scenario by filming myself being deepfaked and showing everybody how easy it is to do on a MacBook, and how it would soon be easy to do on everybody’s mobile phone and so on.

  • With this pre-bunking, the main message is that even if a video is interactive, without some sort of independent source, without what we call provenance (a digital signature of some sort), do not trust it just because it looks like somebody you trust or a celebrity.
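
Provenance in this sense can be as simple as a digital signature that travels with the video. Here is a minimal sketch using the Python `cryptography` package; real provenance standards such as C2PA embed signed manifests inside the media file, and the video bytes here are a placeholder.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The publisher signs the video bytes once, at the source.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"raw video bytes would go here"  # placeholder for file contents
signature = private_key.sign(video_bytes)

def is_authentic(data: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
    """True only if the bytes are exactly what the key holder signed,
    no matter how convincing the footage looks."""
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

assert is_authentic(video_bytes, signature, public_key)
assert not is_authentic(video_bytes + b"tampered", signature, public_key)
```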

  • Now, pre-bunking takes some time to take effect, so we repeated that message throughout 2022 and 2023. But the upshot is that in 2024, when we did see deepfake videos during our election campaign season, they did not have much effect, because over those two years people had already built antibodies, inoculation, in their minds.

  • That’s a great example. I love that example because you’re pointing to the need to understand the threat: in cybersecurity, you let the defenders know ahead of time, so that you build up the antibodies and try to patch your system before the attack actually gets used.

  • And I also heard you say: give people an example of what attackers might do to attack trust, and they develop the antibodies.

  • Sure. Okay, do we move on to other threats?

  • Maybe just to ground it first: think about the compute power going into AI evolution right now.

  • You have a virus, and you have an immune system. The compute power going into the evolution is AI, which is generating all these new kinds of threats and mutating every day. There are billions of dollars of venture capital, open source projects, and collective intelligence inventing new forms and ways this AI can work, so the virus is evolving super fast.

  • And democracies, the governance of societies, are like the immune system, and they don’t have hundreds of billions of dollars of venture capital and open source governance hacking to make the immune system commensurately stronger.

  • So I think the principle you’re going to be speaking to throughout these answers, Audrey, is: how do we make the immune system’s computation stronger than the computation driving the virus’s evolution?

  • And before we jump in there, sorry, because it’s such a good question: listening to the example you gave, Audrey, that we need to pre-bunk by showing people how deepfakes work, I could imagine the audience thinking there are two problems with that.

  • One, you’re sort of teaching all the adversaries how to actually make deepfakes. Aren’t you accelerating the problem…

  • Oh, they already knew it anyway.

  • That’s true, they do already know.

  • But I think the deeper point is: if you don’t already have a system that lets people do verification and content provenance, then you don’t actually leave them with anything to do except doubt everything.

  • So I’m curious about your philosophy there, and then how you go about doing that large-scale upgrading.

  • That’s a great question.

  • So, in terms of information manipulation, we talk about three layers. Actor: who is doing this? Behavior: is this millions of accounts in coordinated inauthentic behavior, or just one single actor? Content: does the content look fake or true?

  • The pre-bunking basically said that by content alone one can never tell, so it asks people to shift left: to judge whether something is trustworthy by its behavior or by its actor.

  • Starting this year, all governmental SMS, short messages from the electricity company, from the water company, from really everything, come from this single number, 111. So when you receive an SMS, whether it is the AI deliberation survey asking you to participate (we’ll talk about that later) or just a reminder about your water utility bill, it all comes from this single number, 111.

  • And because this number is not forgeable (in Taiwan, a normal SMS sender number is 10 digits long, and even longer if it’s overseas), one can very simply tell by the sender that this comes from a trusted source. It’s like a blue check mark. And already our telecom companies, the banks, and so on are also shifting to their own short codes.

  • So it creates two classes of senders. One is unforgeable and guaranteed to be trustworthy. For the other class, you basically need to meet face to face and add the number to your address book before you can confirm it actually belongs to that person.
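
In code, the receiving side of this policy is almost trivial, which is the point: trust keys off the sender class, never the message content. A sketch; the length cutoffs here are assumptions for illustration, not Taiwan’s actual numbering rules.

```python
TRUSTED_GOVERNMENT_CODES = {"111"}  # Taiwan's single government sender

def sender_class(sender_id: str) -> str:
    """Classify an SMS by who sent it, not by what it says."""
    if sender_id in TRUSTED_GOVERNMENT_CODES:
        return "government: unforgeable short code, trustworthy"
    if sender_id.isdigit() and len(sender_id) <= 6:
        return "registered short code (bank, telecom): verifiable"
    return "ordinary number: confirm identity out of band before trusting"

print(sender_class("111"))          # utility bill or deliberation invitation
print(sender_class("0912345678"))   # ten digits: could be anyone
```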

  • Awesome. And that leads back into Tristan’s point, or question. Tristan, you’re saying the general principle is that the immune system of the body of democracy has to be stronger than the new viruses.

  • How do you do that, and how do you think about it? How do you outspend when you have billions of dollars coming in from VCs into just proliferating power?

  • Okay. So the billions of dollars coming from VCs to proliferate power also gave everyone in what we call the social sector, the civil society (which is also in charge of upgrading democracy’s immune system), new sets of tools, for example to fine-tune and train their own language models.

  • Previously, it was very, very expensive to train a language model from scratch. But now, suppose you’re a crowdsourced fact-checking organization; in Taiwan we have Cofacts, collaborative fact-checking. Everybody can flag a message as a possible scam or spam, even on their end-to-end encrypted chat channels, just by forwarding it to the bot, or by inviting the bot (which is open source) into their chat group.

  • What that gives you is real-time sampling of which information packages are going viral. Some of them are information manipulation; some are factually true. Either way, we have a real-time map of what’s going viral at this very moment. And by crowdsourcing the fact-checking (think Wikipedia, just in real time), we have not just the package of information but also the pre-bunking and debunking that goes with it.
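
The counting step behind that real-time map can be sketched in a few lines. The real Cofacts bot is open source and does far more (matching message variants, serving fact-check replies); the names below are invented for illustration.

```python
import hashlib
from collections import Counter, deque
from datetime import datetime, timedelta

class ViralitySampler:
    """Tally messages that users forward to the bot; what people forward
    most right now is, by construction, what is going viral right now."""

    def __init__(self, window_minutes: int = 60) -> None:
        self.window = timedelta(minutes=window_minutes)
        self.events: deque = deque()  # (arrival time, message fingerprint)

    def report(self, message_text: str) -> None:
        fingerprint = hashlib.sha256(message_text.encode("utf-8")).hexdigest()
        self.events.append((datetime.utcnow(), fingerprint))

    def trending(self, top_k: int = 4) -> list:
        # Drop reports older than the window, then count what remains.
        cutoff = datetime.utcnow() - self.window
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()
        return Counter(fp for _, fp in self.events).most_common(top_k)
```

Note the `top_k=4` default: as Audrey says below, only a handful of messages can win the day’s cognitive bandwidth, so that is where defenders focus.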

  • And there are newer methods of training language models, like “direct preference optimization”: you take a language model, show it a set of approved answers and a set of rejected answers, and it figures out the logic of what gets approved and what gets rejected. And even newer methods like “SPIN”: just show it the way the fact-checkers do their painstaking work, and it learns from that train of thought.
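
For readers who want the mechanics, the DPO objective just described fits in a few lines of PyTorch. This is the published loss in simplified form, not the Cofacts community’s actual training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct preference optimization: given log-probabilities of approved
    ("chosen") and rejected answers under the model being tuned and under
    a frozen reference model, push the tuned model toward the approved
    answers and away from the rejected ones."""
    chosen_margin = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_margin = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize how much more the model prefers approved over rejected.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()
```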

  • Using these methods, our civil society has been able to train a language model that provides basically zero-day responses to zero-day viral disinformation, before any human fact-checker has looked at the message. If you go to the Cofacts website, their language model already provides this kind of critical thinking, puts things in perspective, and links the most closely related pre-bunking material to each newly emerged variant.

  • Okay, so this is great. Let’s make sure we’re breaking this down for listeners so that everyone’s understanding.

  • So I heard you say a few things. One is that, okay, we have hundreds of billions of dollars in venture capital training these, you know, large language models and AI systems.

  • What if, instead of saying that’s just the virus from my earlier analogy, we said the immune system can use the fact that we built this virus to strengthen itself? So we’re actually channeling the better and better language models and AI coming out into aiding the immune system: more quickly generating pre-bunking responses, more quickly generating fact-checks.

  • Because instead of having to pay a human being to do a lot of manual, cognitive labor to generate what should be in the fact-check, we can train the model to generate it really quickly.

  • And then I heard you also say: just as open source AI development expands the number of people who are evolving the virus, you’re doing open source collective crowd-sensing.

  • Crowdsourcing of facts, responses, and noticing those things: you’re basically expanding the collective compute of the immune system to try to match the collective compute evolving the virus. Is that right?

  • That’s entirely right. And I would also say that at any given time, a person’s cognitive bandwidth, the amount of information we can receive from our screens (hopefully not touchscreens), is limited.

  • On any given day, there are only so many viral messages, because they also compete for that limited bandwidth. So it is not as if the defenders have to match the attackers’ compute pound for pound, because most generated disinformation and information manipulation would not win the cognitive-bandwidth contest for the news of the day anyway.

  • So we really only have to focus on the three or four things every day that are really going viral.

  • Yeah. So I have a question about that. Isn’t it possible that AI will enable a much more diverse set of deepfake categories and channels? I could ask what the 10 memetic tribes are, the political tribes I want to attack, and generate 10 different kinds of deepfakes for those 10 tribes, with different little micro-realities for each of them.

  • We might previously have lived in a world where most of a country’s attention is on a handful of channels, and a handful of stories are going viral. But now an attacker can run a much bigger, more diverse set of attacks and takes. Doesn’t that force us to expand the horizontality of the defenses?

  • Yes, it does enable a new kind of precision persuasion attack that does not rely on the “share” or “repost” buttons. Instead, it relies on direct messaging, basically, and talks to individuals using a model of their preferences.

  • On the other hand, the same technology can also be used to enhance deliberative polling, where you call a random sample of, say, 4,000 or 10,000 people and ask them the same set of questions to get their preferences.

  • It is used during elections, of course, but also during policymaking. What polling did not do previously is allow the people picking up the phone to set an agenda, to speak their minds, to show their preferences, and to let us, the policymakers, know each individual’s current fears and doubts, and also the personal anecdotes that may point to solutions.

  • So we’re also investing in deliberative polling technology that uses precisely the same kind of language-model analysis tools you just talked about, not to manipulate people, not to scam people, but to truly show people’s preferences.

  • Then we place the people who volunteer to engage in these face-to-face or online conversations into groups of 10. We ensure that each group of 10 has a diversity of perspectives, and a sufficient number of bridging perspectives, to bring everybody to someplace people can live with: a good-enough consensus.

  • And if we do this at scale, we are no longer limited by the number of human facilitators, who are very important and very treasured but simply cannot scale to tens of thousands of concurrent conversations.

  • And then, we can get a much better picture of how to bring people together.
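
One way to implement that pairing step, sketched under assumptions (each volunteer is reduced to a vector of survey answers; the cluster count is invented):

```python
import numpy as np
from sklearn.cluster import KMeans

def diverse_groups(opinion_vectors: np.ndarray, group_size: int = 10,
                   n_viewpoints: int = 5, seed: int = 0) -> list:
    """Cluster volunteers into rough viewpoint groups, then deal them out
    so that every conversation group of ~10 mixes viewpoints."""
    labels = KMeans(n_clusters=n_viewpoints, n_init=10,
                    random_state=seed).fit_predict(opinion_vectors)
    # Interleave members so consecutive people come from different clusters.
    by_cluster = [list(np.flatnonzero(labels == c)) for c in range(n_viewpoints)]
    interleaved = []
    while any(by_cluster):
        for members in by_cluster:
            if members:
                interleaved.append(members.pop())
    n_groups = max(1, len(interleaved) // group_size)
    # Dealing by stride keeps each group's viewpoint mix close to uniform.
    return [interleaved[i::n_groups] for i in range(n_groups)]
```

A production system would also have to enforce the “sufficient bridging perspectives” constraint Audrey mentions; this sketch only handles diversity.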

  • I feel like this is a good opportunity for you to jump in. This is great, Audrey; I think we just need to explain each of these things well.

  • There’s just a lot in it, and I think we should explain it a little. And, Aza, I feel like you have the best expertise to try this one, if you want to.

  • Yeah. The reason I’m sitting back is that I’m trying to think about the overall roadmap, because, just to name it, we sort of skimmed over the synthetic relationships point, which is that there’s going to be a flood of essentially counterfeit human beings.

  • It’s not just mis-information; it’s like mis-people and dis-people; mis-relationship and dis-relationship.

  • Yeah, exactly. You’re going to find somebody on Tinder or Reddit or whatever forum you’re on, and you’re going to get into a conversation with them. They’ll send you selfies and little videos of them going on trips, and you’ll form a long-term relationship.

  • And it turns out that person never existed and was only there to form a relationship, because relationships are the most transformative technology people have. So there’s a new attack vector against populations, where loneliness becomes perhaps the largest national security threat.

  • I feel like we went over the top of that quickly, so it’d be great to pause there for a second before moving on.

  • Yeah, of course. Yeah, I can offer a bridge. Yes.

  • So it is true that point-to-point persuasion, almost like spear-phishing in a cyberattack, micro social engineering, is a new vector of attack.

  • First of all, I think pre-bunking also works on this, in that if you let people know that there is this kind of attack going on, they will be wary of it.

  • Second, I think that instead of point-to-point relationships, people want to belong to communities. Some people have communities where they worship together, practice art together, or make podcasts together, and so on. In such communities, generative AI can also help find the people who may want to form a community with you, instead of just satisfying all your preferences and catering to your every need.

  • It’s the other way around: it shows each and every individual that, for the thing they care about, there’s some existing community that can move all the way to face-to-face or post-symbolic communication, the feeling that we’re actually really there in the same room, and then lead to much more meaningful ties among multiple people.

  • So when people enjoy that kind of community relationship, including actually participating in fact-checking communities, it is much, much more meaningful than a one-to-one companion that may cater to your needs but does not connect you to more human beings.

  • So just to make sure listeners get this: people are vulnerable to a one-on-one attack, one person creating a fake relationship to influence another.

  • But what you’re saying is: what if we group people together into communities where they feel deep care and a deep reflection of their values? That’s what your vTaiwan and deliberative-polling systems and structures do: they invite people into seeing the people who agree with them, and not just on some hyper-polarized outrage topic, for example.

  • But who agree with them on a more bridging consensus: we all agree on this bridging statement; these are the things we all agree on. So people feel membership and belonging in the thing they care about, the bridging sentiment that brings people together. And then you’re saying: what if we use the tech to physically bring people into the same embodied room?

  • We’re trying to fight both the loneliness national-security threat and the individualized outrage-bait information, with this bridging information.

  • That is right. And when I say post-symbolic, I mean any medium that has non-verbal components in it.

  • Case in point: on Threads.net, I’ve been recording short voice snippets for people who mention my name, saying good night, reading poetry, things like that. The point is not the content of my speech, but rather the care and the relationship it fosters.

  • It reminds me of the author Johann Hari’s line — “The opposite of addiction isn’t sobriety, it’s connection.”

  • And in the same vein, the political philosopher Hannah Arendt’s point that totalitarianism stems fundamentally from loneliness. So what I’m hearing you say is that there is a solution here, and not just to better voting.

  • I think you call it the bandwidth problem, or the bit rate problem, that right now humans just give one bit of information every four years to decide which way the country goes, and we could be doing it at a much higher bandwidth or bit rate.

  • But it’s not just a solution to that; it’s a synergistic satisfier, it does more than one thing at the same time. Deliberative polling also brings people together, actually puts people face to face in community to work through problems, so we get a two-for-one.

  • Yes, it builds longer-term relationships, not just transactions, and it also deepens the connection. So it’s with a more diverse set of people, but also deeper.

  • Previously, without this generation of technology, it was always thought of as a trade-off. If you want to build some kind of relationship with billions of people, well, market transactions are the best you can do, right?

  • And if you want to build deeper emotional connection, it’s maybe with 150 people and nothing more. But now we can have both. That’s the idea.

  • I didn’t quite understand the Threads poetry thing; I just missed that.

  • Sure. Threads.net is a social network that I think has integrated some new insights about civic integrity. It bridges you to people who do not share your views but can be bridged over common connections, and you can consciously hide the numbers of likes and shares.

  • So a lot of design went into it that absorbed the past decade or so of insights about antisocial media harms. And I’ve found that Threads.net is a pretty good vehicle for me to wake up and look at people who want to share something with me.

  • And I have now taken to just improvising and saying something to them. So voice-based conversation is used to convey not just the content of the voice, but rather the here and now, the co-presence of it.

  • People feeling sleepy together, people reacting to a sudden chill in the shared weather, things like that. And that builds communities.

  • And I always make my presence known. If I’m about to get on the high-speed rail to Tainan (I’m in Tainan two nights a week now), I make this fact known, and then we talk about things common to the Tainan community and so on.

  • We might just pause here. I think we need to redirect the conversation a little bit, toward the relationship Taiwan has with China versus the one the US has. Or do you want to go there?

  • I’d like to go quickly to question three; some of these are more specific, about how AI particularly impacts Audrey’s vision for digital democracy, and what the challenges are since the proliferation of large language models. Did we get to that?

  • Yeah, the premise was what large language models do to democracies, and then we enumerated that.

  • And her first answer was about how you use large language models to strengthen the immune system: it’s not that large language models are only the virus; the immune system can use these larger models to strengthen itself.

  • Yeah, I’m happy to move to any segment.

  • As a listener, just to jump in: although our listeners are very educated, we probably shouldn’t assume they know everything about the China-Taiwan relationship and the threats to democracy you experienced in the most recent election.

  • Okay. I can paint a scary picture first.

  • Or just a realistic one, and then comparing and contrasting that with what the US is facing. There may not be many exact comparisons, but it would help bring the stakes home and get listeners on board for the more technical aspects of the conversation.

  • Okay, sure. In August 2022, just before my ministry started, US House Speaker Nancy Pelosi visited Taiwan. And that week, we saw how cyberattacks combined with information manipulation really work, from the PRC against Taiwan.

  • Of course, we already face millions of attempted cyberattacks every day. But that day, we suffered more than 23 times the previous peak: an immense number of denial of service attacks that overwhelmed not just the websites of our Ministry of National Defense and the President’s office. The Ministry of Transportation also saw the commercial signboards outside railway stations compromised, replaced with hate messages against Pelosi.

  • Not only that: in the private sector, convenience store signboards were also hacked to display hateful messages. And journalists wanted to check what was actually going on. Was it really true they had taken over the Taiwan railways? They hadn’t, but rumor said they had.

  • Well, the journalists found the Central News Agency and the ministries’ websites very slow to respond, and that only fueled the rumor and the panic. Concurrently, of course, missiles flew over our heads, but that’s another thing altogether.

  • So the upshot is that each and every one of those attack vectors amplifies the others. The strategic goal of the attackers, of course, was to crash the Taiwan stock market and to show the Taiwanese people: see, it’s not a good idea to deepen your relationship with the US.

  • First, it didn’t work. We responded to the cyberattacks very quickly. I came out and said: “You know, our new ministry’s website went online the same hour the missiles flew over. It wasn’t down for even one second. Come on in, attack us, because it’s using IPFS, the Bored Ape Yacht Club NFT storage.”

  • Anyway, it was somewhat humorous, and it also educated the public about the difference between dialing a line to keep it busy and actually taking over a Taiwan railway station. They’re not the same thing. And that really worked: people did not panic, and the stock market actually rose that day.

  • And we very quickly reconfigured our defenses against this kind of coordinated attack. All in all, the battlefield is in our own minds. It is not in any particular cyber system, which can be fixed and patched. But if they create the kind of fear, uncertainty, and doubt that polarizes society and makes one part of society blame the other for causing the chaos, that leaves a wound that is difficult to heal.

  • So we’ve mostly been working on bridging those polarizations. And I’m really happy to report that after our election this January, the supporters of all three major parties feel they won something, and there’s actually less polarization than before the election.

  • So we not only overcame the threat of polarization, of precision persuasion turning our people against each other; we also used this experience to build tighter connections, like a shared peak experience that brought us together.

  • That’s an amazing example, an incredible example. First, it’s incredible work you’ve done, Audrey, crafting these responses to what could have been a truly panic-inducing attack.

  • You’re speaking to something we’re trying to get ahead of for other democracies. Here we are in the United States: what’s to stop China or Russia from doing a simultaneous denial of service attack, hitting the Colonial Pipeline, gas stations, ATMs, and banks, combined with an overwhelming deepfake information attack? That’s the fear with AI: these democracies are now super vulnerable to combined attacks that AI accelerates and enables.

  • What you’re providing here is an answer, a proto-blueprint for a response that leaves democracies stronger rather than weaker in the face of AI. That’s why we’re doing this episode, and that’s why we wanted you on. This is just really great.

  • Another place to go from here: in a conversation we had previously, you talked about what’s required on the cybersecurity side, this idea that you have to assume your systems have already been hacked, “assume breach.” I’d love for you to talk through that, and how democracies should think about cybersecurity.

  • This is so important that I want to make sure listeners get it, because what you’re also saying here is that, just as you pre-bunk the information threats that come in, with tools like Pentera you’re basically pre-computing all the attacks they could run in the cyber realm.

  • And then: how do I defend against all of them? It’s like asking what all the viruses are that people could make, and then pre-engineering the vaccines for all those potential viruses, so that I’ve basically defended against them.

  • And the second thing I heard you say is redundancy: if one system fails, you’ve designed two other backup systems that it will automatically switch over to. So resilience comes from a principle of variety and diversity; the best defense that creates that resilience is having multiple backup systems.

  • And that’s a theme throughout your work that I want people to see: these are general strategies for making sure the immune system is stronger than these supercharged viruses, and you’re presenting some principles for that.
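
The redundancy half of that can be made concrete with a client that assumes breach, that is, assumes any single mirror may be down or compromised. A sketch; the URLs are hypothetical.

```python
import urllib.request

MIRRORS = [
    "https://service.example.gov.tw",   # primary
    "https://backup.example.gov.tw",    # independent backup
    "https://gateway.example.net",      # third copy, e.g. a distributed store
]

def fetch_with_failover(path: str, timeout: float = 3.0) -> bytes:
    """Assume breach: never depend on one system staying up.
    Try each mirror in turn and return the first healthy answer."""
    last_error = None
    for base in MIRRORS:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                if resp.status == 200:
                    return resp.read()
        except OSError as err:  # refused, timed out, DNS failure...
            last_error = err
    raise RuntimeError(f"all mirrors failed: {last_error!r}")
```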

  • Yes, this is a strategy of Plurality, or collaborative diversity. You can read all about it at Plurality.net.

  • Do we need more explanation on this? I just want to make sure before we move on, because a lot of people don’t know.

  • Yeah, we’re gonna run out of time, so let’s move on.

  • Yeah, other topics?

  • So, I know we do want to talk about why Taiwan is a target of all of this in the first place, and some interesting facts about Taiwan’s relevance on the world stage, including the fact that it manufactures the world’s AI chips. We could do that, or, I just want to make sure we quickly do some wayfinding in case we want to do something different.

  • Because ideally I would like this episode to be pointed to, shared around in every democracy in the world, like: how do I get an Audrey for my democracy? I should do these 20 things. And we’re giving people a hint of it.

  • Yeah, and I’m not sure which way to go, so I’ll put this into the shared space: we touched on deliberative polling, but deliberative polling, in most people’s heads, draws no image. They see a blank thing; it sounds sort of boring.

  • And there’s a motivation here. I remember I originally saw this from the constitutional lawyer Larry Lessig, who showed work from Martin Gilens and Benjamin Page. It’s about disenfranchisement, about why people think democracy doesn’t work. There’s this incredible set of graphs comparing average citizens’ preferences to which policies get passed, and there is no correlation: average citizens’ preferences make no change to the agenda of what government cares about.

  • But if you look at economic elites’ preferences or interest groups’ preferences, that does correlate: if an interest group or an economic elite wants something, it’ll get on the agenda and it may become policy.

  • And of course, people end up disenfranchised and say democracy doesn’t work. What you do, and I think your work is so brilliant here, is find robust ways to let everyday citizens’ preferences set agendas, and then find policy solutions that work and bridge across unlikely groups.

  • So I think building that out for people, giving like a specific example for how it goes end to end, I think would be really powerful.

  • Sure. So the first time we used collective intelligence systems on a national issue was in 2015, when Uber first entered Taiwan. There were protests and everything, just like in other countries. But, very differently, we asked the Uber drivers, the taxi drivers, the passengers, and everyone really, to go to this online pro-social medium called Polis.

  • The difference with that social medium is that instead of highlighting the most clickbaity, most polarizing, most sensational views, it only surfaces the views that bridge across differences. So, for example, somebody says: oh, I think surge pricing is great, but not when it undercuts existing meters. That is a nuance.

  • Nuanced statements like this usually just get scrolled past in other, antisocial social media, because they cost more mental bandwidth to process, right? But Polis makes sure they’re front and center.

  • The same algorithmic idea that powers Polis would eventually find its way into Community Notes, kind of like a jury moderation system for Twitter, nowadays X.com. And because it’s open source, everybody can audit it to see that their voice is being represented in a way that is proportional to how much bridging potential it has.

  • It also gives policymakers a complete survey of the middle-of-the-road solutions that will leave everybody happier. And much to our surprise, most people agree with most of their neighbors on most of the points, most of the time. It is only the one or two most polarized points that people keep spending calories on.
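
A simplified sketch of that surfacing logic: the production Polis algorithm uses dimensionality reduction and more careful statistics, but the core idea is that a statement must win agreement in every opinion group, not just one.

```python
import numpy as np
from sklearn.cluster import KMeans

def bridging_scores(votes: np.ndarray, n_opinion_groups: int = 2) -> np.ndarray:
    """`votes` is participants x statements: +1 agree, -1 disagree, 0 unseen.
    A statement scores highly only if every opinion group tends to agree
    with it, so divisive statements sink and bridging ones surface."""
    groups = KMeans(n_clusters=n_opinion_groups, n_init=10,
                    random_state=0).fit_predict(votes)
    scores = np.ones(votes.shape[1])
    for g in range(n_opinion_groups):
        group_votes = votes[groups == g]
        agree = (group_votes == 1).sum(axis=0) + 1  # +1/+2: mild prior
        seen = (group_votes != 0).sum(axis=0) + 2   # toward 50% agreement
        scores *= agree / seen
    return scores  # rank statements by this; highest = most bridging
```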

  • But if we just say: let’s make sure that rural places, the co-ops, the unions, and so on can also enjoy Uber-like dispatch apps; let’s make sure that insurance is taken care of; let’s make sure it’s clearly labeled in the app and that surge pricing never undercuts existing meters, and so on, then everybody’s actually happy with it.

  • So for many years now, Uber has been a legal taxi fleet in Taiwan, but many existing fleets have also upgraded to be Uber-like, so it’s a win-win-win situation. Now, because of that peak experience, we’ve applied this method to tuning AIs as well.

  • Through the Collective Intelligence Project, we worked with Anthropic, with OpenAI, with Creative Commons, with GovLab, and many other partners. In fact, Anthropic even showed that the constitution, the set of principles that drives their AI, written by their top researchers, pales in comparison with the people’s version, produced just by people participating in a deliberative poll using Polis.

  • They can just say, “Okay, I think AI should behave in this way, in that way, and so on,” and upvote and downvote each other’s sentiments. And when the resulting matrix is used to train Claude, Anthropic’s AI, it is as capable as Anthropic’s original version, but much more fair and much less discriminatory.

  • There’s so many things in there…

  • Can I ask a clarifying question? I wonder, and I’m sure listeners will be wondering: how many people are actually doing this?

  • What percentage of people use vTaiwan? What are the limitations? What’s the buy-in from older groups and disenfranchised groups? How many people will participate in these things? It seems like there might be selection bias in the kinds of people who actually get involved.

  • Want me to tee that up, Sasha? I’ll just say it. So, Audrey, how do you get over the selection-bias effect that certain kinds of users, maybe more internet-centric, digital-native users, are going to use this while the rest are left out? How do you deal with that problem?

  • In Taiwan, broadband is a human right, and broadband connectivity is extremely affordable, even in the most remote places. For $15 a month, you get to access unlimited bandwidth. And because of that, we leave no one behind.

  • When we do deliberative polling, it is a randomized SMS sample. So I guess owning five phones does increase your chance of being selected, but the effect is relatively minor.

  • So we just randomly send SMS, using the trusted number 111, to thousands or tens of thousands of people. And the people who find some time to answer a survey, or just to listen to a call, can speak their minds and contribute to the collective intelligence.

  • While of course this is not 100% accessible (there are still people who need, for example, sign language interpretation, which we’re also working on, and translation into our other 20 national languages), I think this is a pretty good first try, and we feel good about the statistical representativeness.
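
The sampling step itself is deliberately boring. A sketch with an assumed registry shape (resident id mapped to that resident’s phone numbers), which also shows how sampling residents rather than numbers removes the five-phones bias Audrey mentions:

```python
import random

def draw_deliberation_sample(registry: dict, n: int, seed: int = 42) -> list:
    """Sortition by SMS: pick residents uniformly at random, then invite
    each chosen resident through one of their numbers via the trusted
    sender 111."""
    rng = random.Random(seed)
    residents = rng.sample(sorted(registry), k=min(n, len(registry)))
    return [rng.choice(registry[r]) for r in residents]

# Invented data, for shape only.
registry = {"A123456789": ["0911000001", "0911000002"],
            "B987654321": ["0922000003"]}
print(draw_deliberation_sample(registry, n=2))
```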

  • And because the tools are open source, if you’re a community organizer who already has hours and hours of focus-group conversations, surveys, ethnographic interviews, or whatever, you can just go to Talk to the City and use its open-source methodology to turn your Vimeo or YouTube playlist into this kind of collective intelligence, completely free of charge.

  • I’m confused. How do you not have 12-year-olds contributing? Are you sending this message only to voting-age people? Doesn’t it skew the results a little if you have 6-year-olds doing the survey?

  • The message is sent to everyone, so it may be 6-year-olds receiving it, or 12-year-olds.

  • I mean, they’re going to live in the future for a longer time, so I think they should have disproportionately more voice.

  • Okay, that’s good. They might not understand the issues, though. But because we’re running out of time, I really want to get to this: Aza and Tristan, you have a lot of concerns about the safety of open-source AI, and yet what Audrey is discussing really rests on using it.

  • Yeah, I do have another hour to spare if you do, because I see now you’re moving to a topic that five minutes wouldn’t do justice.

  • Okay, we can get into it, but I want to emphasize that we need to talk about this in a way that’s intelligible to a large group of people; we’re still trying to upskill our audience on what open source is. So anytime you mention open source, Tristan and Aza, I think you need to really signpost and hold people’s hands through that conversation. And Audrey, I’m interested in knowing whether you think there are dangers with open source, and where you think the line is, if we can touch on that.

  • Before we do that, I just want to make sure we do have more time. Are we all here for at least a while longer? I don’t know if you were ready for that, but we can all do more. I can do half an hour to an hour. Excellent. I was going to say, I think there’s more meat here; each of these areas is huge, and we’re kind of plowing through because we need to cover things. But I think it’s going to be really helpful if we can spend the time.

  • We might want to do the critique question, though. You brought up one critique, Sasha: how do you get a requisite variety of citizens, and isn’t it just the digital ones? And we got an answer from Audrey on that.

  • There is another critique, and I was wondering, Audrey, if you wanted to steelman the critiques that have been made about vTaiwan. A former Taiwanese legislator, Jason Hsu, said that although the platform has earned many plaudits and endorsements, it hasn’t been used for a major decision since 2018.

  • He says that since the government is not mandated to adopt recommendations coming from vTaiwan, legislators don’t take it seriously. What is your response to that critique?

  • Yeah, vTaiwan inspired many things, one of which is the Join platform, the national participation platform. Along with the Join platform, we installed an e-petitioning system, where 5,000 people joining in electronic countersignature can interpellate, basically demanding an answer from a minister; or, if it’s cross-ministry, the team of participation officers arranges collaborative meetings and meets people where they are.

  • There have been more than 100 successful collaborative meetings since Join.gov.tw started. Now, admittedly, when you compare the legitimacy of vTaiwan, a multi-stakeholder conversation among self-selected stakeholders, with Join, which always answers to at least 5,000 citizens, Join enjoys higher legitimacy with the career public service.

  • So after Join got more popular around 2017-18, and also because vTaiwan depends on weekly face-to-face meetings to set an agenda and work out each stakeholder’s platform, vTaiwan did not convene as much during the three years of the pandemic.

  • In fact, many of the core vTaiwan contributors were also behind the contact tracing, the mask-rationing map, and everything else that was so successful during the pandemic. So people got busy with other things.

  • Now that the pandemic is behind us, I’m happy to tell Jason Hsu that vTaiwan is back, and now proudly sponsored by an OpenAI democratic-inputs grant. vTaiwan is now pioneering the use of language models, not just to do fact-checking or pre-bunking as other g0v projects do, but to build what they call recursive publics: communities that can use these vTaiwan tools to put forth the terms they want with their surrounding communities, in a kind of community self-determination.

  • Instead of asking legislators to pass laws, they can set community codes of conduct, rules, and so on, for example about what kinds of material to include in AI models that respond to Taiwanese culture. This is a very multifaceted question, and the answers differ depending on whether you talk to people who care about copyright or people who care more about creativity and so on.

  • Because there’s no immediate national act regulating that particular point, the vTaiwan community has much more room to explore. So I would say that vTaiwan is now moving into a much more lab-like, research-like role, while the participation officers, the Join platform, and the state-run deliberative polling are now well institutionalized.

  • So it’s not like back in the Uber days, when the career public service really had no idea how to run these things. Now we have a generation of career public servants who have been educated and trained in vTaiwan methods.

  • Got it. Go for it, Tristan.

  • Just to make that one notch simpler: the Join platform is a petition platform in which, if five thousand people are demanding the same thing, they get the attention of the government, a facilitated process, and become part of a deliberative poll.

  • And then there’s also the text message service you mentioned earlier, which randomly texts some selection of people. When they get texted, do they land in vTaiwan, or do they land in some other process?

  • Those are the same institutional structure: the participation officers are the government side, run by the career public service. Whereas vTaiwan, like the Cofacts crowdsourcing project, remains in civil society, funded by nonprofits, donations, and so on.

  • I see. So the critique that vTaiwan hasn’t been used for that many major decisions since 2018 refers to its earlier, self-selecting mode, people opting into going there on their own, versus the new systems doing the randomized polling and the five-thousand-person petitions.

  • The petition system is more traditional, for the career public service.

  • Got it. Great. I just think this is helpful, because when I was diving into your work, Audrey, the nuance in how you create spaces in which conversation happens struck me as actually critical and deeply thought through.

  • For instance, there is no reply button in your systems. And you think: OK, how do you have a conversation without a reply button? It’s not that people reply; it’s that when you see something you disagree with, instead of getting into a flame war by arguing, you have to come up with another positive statement of what your beliefs actually are.

  • And then, instead of simple voting, which can be attacked in all sorts of ways, you describe a system where every positive statement of value you make tells the system more and more about who you are and which set of tribes you belong to.

  • And the bridging statements are not the ones that get the most votes, but the ones that sit across many different tribes, where those tribes are segmented based on what people’s actual statements say they believe.

  • So it is that set of statements, bridging in a non-voting sense, based just on people’s values, that sets the agenda, that says: this is what we, a cross-section of the population, believe. That becomes what legislators and stakeholders have to respond to.

  • And then it’s no longer just citizens working to come up with policy; it’s the stakeholders and the government and the experts working too, but beholden to the agenda set by the people in that way.

  • Did I describe that correctly?

  • So Polis, the Join petition system, Community Notes on X.com – they all share this fundamental design that there is no reply button.

  • And through this bridging-bonus algorithm, we give the bridging statements more and more visibility, so people can construct longer and longer bridges, bridging across greater and greater differences in ideologies, tribes, experiences, and so on.

  • I mean, it’s mentally very, very difficult to bridge long distances. This is true for anyone. But just to explain an idea to somebody who has slightly less experience, well, that’s just sharing your knowledge, right? That kind of bridging everybody can do.

  • So by visualizing which gaps still remain to be bridged, it almost turns into a game: challenging the people with a knack for building bridges between left and right, between the conservative and the progressive, like Tristan and Aza, to find novel ways to put forth a statement that actually just makes sense to people across very wide ideological differences.

  • And so this system that gamifies the bridge-making activity is, I think, very, very powerful, and it’s at the core, regardless of which kind of space we choose to design.
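
For the technically curious, the bridging-bonus idea can also be sketched as a tiny matrix factorization, in the spirit of the open-source Community Notes ranker; this is a simplification, not the production model. Each rating decomposes into a viewpoint-driven part and a viewpoint-independent part, and only the latter earns a note visibility.

```python
import torch

def fit_bridging_model(ratings: list, n_users: int, n_notes: int,
                       epochs: int = 200, lam: float = 0.1) -> torch.Tensor:
    """ratings: (user_id, note_id, 1.0 helpful / 0.0 not) triples.
    Model: rating ~ mu + user_bias + note_bias + user_factor * note_factor.
    The 1-D factor absorbs "people on my side always upvote this", so a
    note earns a high intercept (note_bias) only when it is rated helpful
    across the viewpoint axis, i.e. when it bridges."""
    mu = torch.zeros(1, requires_grad=True)
    user_b = torch.zeros(n_users, requires_grad=True)
    note_b = torch.zeros(n_notes, requires_grad=True)
    user_f = torch.randn(n_users, requires_grad=True)
    note_f = torch.randn(n_notes, requires_grad=True)
    users = torch.tensor([u for u, _, _ in ratings])
    notes = torch.tensor([m for _, m, _ in ratings])
    y = torch.tensor([r for _, _, r in ratings], dtype=torch.float32)
    opt = torch.optim.Adam([mu, user_b, note_b, user_f, note_f], lr=0.05)
    for _ in range(epochs):
        opt.zero_grad()
        pred = mu + user_b[users] + note_b[notes] + user_f[users] * note_f[notes]
        loss = ((pred - y) ** 2).mean() + lam * (
            user_f.pow(2).mean() + note_f.pow(2).mean() + note_b.pow(2).mean())
        loss.backward()
        opt.step()
    return note_b.detach()  # rank notes by intercept: highest = most bridging
```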

  • And just to link this for listeners who know our work on social media: instead of rewarding “division entrepreneurs,” who identify creative new ways to sow division and inflame cultural fault lines, this rewards entrepreneurs of bridging and synthesis.

  • And per our frequent referencing of Charlie Munger’s quote, “if you show me the incentives, I’ll show you the outcome.”

  • What I love about Audrey’s work is that it’s about changing the incentives, so that we get to a different outcome. And for those who aren’t tracking it, the Community Notes feature on X, or Twitter, was actually inspired by a parallel sort of project called Polis.

  • Polis, that’s right. It’s a different implementation of the Polis paper.

  • Yeah. And for those who aren’t tracking it, the Community Notes feature that’s live on Twitter, which Elon is using, was actually built before Elon got there, with the help of another collaborator of Audrey’s, Colin, who helped build Polis.

  • So there’s a whole suite of open source tools that are part of this democracy upgrade plan: the petition system, the deliberative polling, Polis, no reply buttons.

  • And I think part of this episode is meant to point at this long list of solutions, this long list of projects. If you imagine every democracy in the world implementing this whole suite, you get your big upgrade: from your vulnerable Windows 95 operating system that’s falling over in the age of AI, to this new, super robust, 21st century, Audrey-enhanced, bulletproof democracy that’s sitting there doing a really good job and outperforming other societies.

  • I think we should go to a place where, from the outside, it may look like we have a disagreement, or divergent views, which is the question of open source AI.

  • In a previous episode, Aza and I did a big segment on the dangers of open source AI, and yet open source AI in your work has been key to actually strengthening democratic societies. But there’s also a difference between open source AI development and frontier AI development, where I think you, and I, and all three of us share concerns about what happens if we keep racing.

  • These frontier models going from GPT-4 to GPT-5, and from Claude 2 to Claude 3, all the way to superintelligence: we have some shared concerns there.

  • Could you give your views about the risks and the benefits of open source, where you think that aligns with our views and where it maybe disagrees, and how that relates to the broader race for artificial general intelligence?

  • Do I assume your listeners already know all these things, or do we stop and define open source?

  • No, we’re going to need to define open source a little bit, as well as the difference between open source and open weights.

  • I think that’s a really important distinction.

  • Yeah, we can also rerecord that version of the question in a separate session. But do you want to just take a stab, and then we can… Audrey, can you show us the way through this?

  • Sure. So open source, when that term first appeared in the late 90s, meant something very specific.

  • It means that the companies, or any group that produces software, are willing to let outside contributors in, and also to contribute out to the ecosystem, so that people share the burden of maintaining the source code together, and also look for security vulnerabilities, portability improvements, many small things that large consumer-oriented proprietary software companies may just deprioritize and never get to. It’s the Wikipedia model applied to software development.

  • Now AI, and generative AI by extension, has always been developed in this tradition of open research. It was not until around GPT-3 that people started to think: actually, if we keep applying this tradition as the models become more and more powerful, maybe they will first hack democracy, or hack the cybersecurity system, even before we enjoy the benefits of open source.

  • And so came the idea of red teaming: finding the vulnerabilities before a model is released, instead of just publishing everything by default. And also a culture of setting a buffer: even if we think a capability is probably not enabling agentic or extinction-scale risks, we don’t actually know what the adversaries’ capabilities are. What if this is just the one missing puzzle piece that they lack?

  • So there’s this idea of buffering: when we hit around one-sixth of the capability threshold that we think will cause harm, we just stop there and do horizon scanning.

  • Actually, deliberative polling is a kind of horizon scanning, because you can ask everyday citizens what threats, what scenarios, they’re anticipating.
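
Written as policy-as-code, the buffer is almost embarrassingly simple, which is rather the point; the threshold and units below are placeholders, not any lab’s actual numbers.

```python
HARM_THRESHOLD = 100.0   # score on a dangerous-capability eval (assumed units)
BUFFER_FRACTION = 1 / 6  # stop well before the estimated danger line

def capability_gate(eval_score: float) -> str:
    """Buffering policy: pause at one-sixth of the estimated harm
    threshold and run horizon scanning (deliberative polls included)
    before any further scaling or release."""
    if eval_score >= HARM_THRESHOLD:
        return "halt: past the estimated harm threshold"
    if eval_score >= BUFFER_FRACTION * HARM_THRESHOLD:
        return "pause: buffer reached; trigger horizon scanning and audits"
    return "continue: well inside the buffer"
```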

  • Now, in AI development, I think what really works is to take a model that we know is small enough that it will not cause an apocalypse, and then just tune it toward a particular person’s liking. So I’m working with, say, the Mixtral models, which combine quite small models, called Mistral, into a mixture of experts.

  • I can tune those experts with the private data that I have. Maybe one expert becomes great at answering emails, maybe one expert becomes great at podcast recordings, and so on, based on materials I have that I don’t necessarily share.

  • So when I fine-tune it toward my liking, to be more like me, basically, I’m not really increasing its general capability. I’m just making sure that it saves my time and is biased toward where I am, instead of biased against me. It feels less like cultural colonization when I fine-tune it to my liking. And all this is very good.
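
In practice this kind of personal tuning is usually done with low-rank adapters, which touch only a sliver of the weights. A sketch with the `peft` library; the model name is a real small open-weights model, the training data is omitted, and the details are illustrative rather than Audrey’s actual setup.

```python
# pip install transformers peft
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = "mistralai/Mistral-7B-v0.1"  # a small open-weights model
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters: enough to make the model sound like you on your own
# private email or podcast data, without raising its general capability.
lora = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```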

  • So I don’t think the two concerns, one about frontier model capability, and one about this open research where people get to scratch their own itch and discover vulnerabilities and so on, are at odds.

  • In fact, the open source community is doing just fine with smaller models of just 7 billion or 13 billion parameters, because the compute power of people’s AI PCs or MacBooks is about that level. So I do not think having more open source activity there would necessarily contribute to the truly fundamental frontier AI breakthroughs that Tristan is worried about.

  • I’m just aware of how much jargon and technical detail people would need to know to understand everything we’re talking about: mixture of experts, “Mixtral”, 7 billion, 13 billion. I’m just aware that most people aren’t going to get it.

  • So I’m wondering about a simpler way to talk about the philosophy of what we’re discussing, at an abstract level, because basically what your claim is… Go ahead.

  • The way it breaks down in my head is that one of the principles you stand for, Audrey, is that power should be at the edges: people should be able to use AI in a way that matches their desires.

  • Democracies should have the ability to retune these powerful AI models in service of strengthening democracy, and for that, it really helps for the models to be open source and open weight.

  • But we’ve also said at the beginning that the proliferation of open models preferentially harms democracies, because their information systems are open and so they are more vulnerable to flooding attacks.

  • And one of the challenges that we talked about at the Center for Humane Technology is that it is easier to tear down, say, trust than it is to build it up, although your work is exactly to find ways of rebuilding trust.

  • And so I guess there’s a threading of the needle that this question is about: how do you know when to release these models open weight, and when to hold them back, so that democracies can be resilient, or even antifragile?

  • And maybe, as you answer this, Audrey, you can introduce the idea of releasing AI that is offense dominant versus defense dominant, because I think people need that tool: what is an example of an offense-dominant AI, and why is that scary or bad?

  • If we open source offense-dominant things, we’re open sourcing things that have more ability to take down, erode, and destroy; whereas if we open source defense-dominant things, we’re open sourcing things that have the ability to strengthen, maintain, and grow.

  • I don’t know if you agree with that distinction, but honestly, I think listeners can’t follow this; it goes over the heads of most people, because it’s unfortunately just hard to get. But I’d like to take the best shot we can.

  • I’d like you to take the best shot.

  • When in doubt, fall back to pandemic analogies.

  • In 2020, Taiwan rationed out surgical masks. We had this communication meme that said, “Don’t touch your face with your unwashed hands; wear a mask.” The meme really worked, because obviously wearing a mask protects your face against your own hand, and it sidesteps all the conversations about N95s or whatever.

  • But if there is a way to make N95 and medical-grade masks at scale, very quickly, with everyday materials in everybody’s homes, that is the kind of technology we should open source. Everybody should set up a mask making plant at home.

  • And this is because a medical-grade mask is fundamentally a defensive technology. It protects not only against the coronavirus, but also against the common flu. It is quite general purpose. And there’s no military use I can think of for that technology.

  • So the more a technology looks like a surgical-grade mask, the more we should make it open source. And the more a technology looks like something a P3 lab would produce, with a very high lethality rate, the more we should insist on regulations, and also on drills, audits, and so on, before it’s released. Or maybe it should never be released.

  • So I actually just want to pause and make sure that listeners really get this, because it is such an important concept: is the technology that we’re open sourcing defensive or offensive?

  • If you open source a mask, more people will have access to a defensive technology, so society gets more resilient, more robust, more protected.

  • But if you open source, let’s say, gun designs, or put bags of anthrax in every Walmart so anybody could buy one and do their own biology research, that would be open sourcing, or democratizing, something that is offensive to society, even though some people could technically take that anthrax bag and maybe do something defense-dominant with it.

  • The majority result would be an offensive, harmful effect on society. And the principle I’m hearing you speak to, Audrey, is that when we’re releasing something offensive, we should be in the conversation about preparing for the risk, about regulation.

  • How do we know who should have access to it? We don’t just open source it. And I actually want to really highlight this for listeners, because as they hear other parts of our conversation, they might hear us disagreeing on open source. Actually, I don’t think we disagree; this is the principle that separates out what we want to open source.

  • We want to open source things. What’s confusing about AI is that it’s dual use: it’s both offensive and defensive. And one of the things is that when you open source AI, it actually teaches you things about the really advanced closed-source models. I believe there are papers on this: you can take an open-source AI model and use it to identify how to unlock a GPT-5 or GPT-6, a really big, advanced model.

  • So think about it: we let the cats out of the bag, and those are the small open-source models. But if I reverse engineer the cat, it tells me how to unlock the lion from its cage. And that’s really dangerous. So how do we tease apart how we deal with open-source AI more broadly?

  • Yeah. Suppose the world had no open source AI models, and there were only two or three generative AI models to choose from (not really choices).

  • Then the defensive technologies would not be distributed. We would rely on those two or three model makers to implement, say, watermarks. The sentences they generate, the pictures they generate, would carry invisible watermarks that the makers can detect.

  • Or not even watermarks; they could just keep a record of all the output they have generated. And then you could always ask the maker, “Is this sentence generated by you?” And if there are only two or three suppliers, this is a bulletproof way against deepfake attacks, because you can always tell where something comes from.
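
As a toy illustration of that record-keeping idea (a hypothetical sketch, not any vendor's actual system), the maker could log a fingerprint of every output and answer provenance queries by exact match:

```python
import hashlib

class ProvenanceLog:
    """Toy registry a model maker could keep of everything it generated."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    @staticmethod
    def _fingerprint(text: str) -> str:
        # Light normalization, then an exact-match hash of the output.
        normalized = " ".join(text.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def record(self, text: str) -> None:
        self._seen.add(self._fingerprint(text))

    def was_generated_here(self, text: str) -> bool:
        return self._fingerprint(text) in self._seen

log = ProvenanceLog()
log.record("The election was held on January 13.")
print(log.was_generated_here("The election was held on January 13."))   # True
# A small open model only has to rephrase the sentence and the exact-match
# lookup fails -- the weakness the conversation turns to next:
print(log.was_generated_here("On January 13th, the election took place."))  # False
```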

  • But we do not live in such a world.

  • What Tristan just alluded to is that you can always take a watermarked output from DALL-E 3 or whatever, and then use a small open source model to rephrase it. And once it’s rephrased, remixed, the watermark is gone, and the original maker, OpenAI, doesn’t recognize the remixed picture anymore. But the remixed picture is as persuasive, or even more persuasive, than the one generated by the larger model.

  • So we now live in a world where we really have to decentralize the defensive capabilities, just because the existing rephrasers and remixers are already out there in the form of open source, and it’s almost impossible to ban them now. Which is why we’re shifting left to the actor layer, and why we’re saying to our people: no matter how convincing an Audrey Tang video or SMS or website looks, if it doesn’t come from 111, then it’s not Audrey.

  • So I just want to clarify for listeners: when you said “shift left,” you don’t mean shift left politically. You’re talking about the ABC model, actor/behavior/content. Where we used to defend at the content level (“this is a deepfake”), we can’t do that anymore, because we’ve open sourced too many fake-image and fake-text creation tools.

  • So watermarking won’t be a way to deal with it at the content level; we have to move to the actor or behavior level. That’s what you meant by shift left.

  • Yes, exactly. I don’t mean it in a left wing, right wing thing. It takes both wings to fly.

  • Okay, I think this is great. I think it’s an excellent level of accessibility for people. I’m super excited about the way that brought it down to earth.

  • Can I get one of you guys to ask Audrey a question, please, about what the other learnings are for other democracies? Like, if you were going to go… I know other leaders speak to you about what you’re doing in Taiwan.

  • Like the one takeaway?

  • What’s the takeaway? Right.

  • I’m going to say something quite controversial to a US audience.

  • The one takeaway that we would like to share from our January election, and this is controversial, is to use only paper ballots.

  • We in Taiwan have a long tradition that each counter in each counting station raises each and every paper ballot above their head.

  • There’s no electronic tallying, there’s no electronic voting, and YouTubers from all three major parties are in practically every station, with high-definition video cameras recording each and every count on their own devices.

  • So by using cutting-edge technology, broadband, high-definition video, and things like that, only on the defensive part, that is to say, to guard against election fraud, we entirely pre-bunked the election-fraud deepfakes that did appear right after the election. There was no room for them to grow.

  • Whatever the accusation was, you could find, in that particular counting station, three different YouTubers belonging to three different parties who did have an accurate record of the count.

  • Because the information manipulation attack does not seek to support or oppose a platform; what it seeks is for people to no longer trust democratic institutions. So it is the fairness and the secrecy of the election that they most target.

  • And to pre-bunk that, so far there’s no better technology than asking each of our citizens to bring your high-definition camera, which may just be your phone, and contribute your part in witnessing the public counting of a paper-only ballot at your nearby station.

  • And still, we get a result within four hours or so, so it’s not particularly slow.

  • But the key takeaway is make sure the ballot is on paper.

  • Yes, because that’s how people can observe.

  • There’s a grand irony in saying here’s the 21st century upgrade plan for 18th century democracies…

  • Live streaming a paper ballot.

  • …but, I mean, it is ironic, but to Audrey’s point, it’s using the technology in a defensive way rather than an offensive way: bringing in the 21st-century technology to make sure that everyone sees the same thing at the same time. It creates a shared reality to fight the disinformation and other attacks against the legitimacy of the election.

  • Yeah, this is a pre-bunking and defense argument. And of course there are efficiency sacrifices, but these are minor.

  • We use paper here in Australia. Audrey, you’d be proud of us. Paper ballots only.

  • I have one more question. That’s on making elections in particular more resilient, but zooming out, what other advice would you give to other countries on creating more AI-resilient democracies? I’m thinking, if you were to advise the UK or the US or Australia, what do you think, aside from election resilience, they should be looking at?

  • I mean, in some ways that’s what this whole episode has been about, right? But maybe it is good to speak to policymakers directly.

  • Sure. Yeah, if you’re a policymaker…

  • Say I’m a policymaker from any of the democracies that are having elections this year, and I’m realizing that this may be the year of the last human elections, because of the impact of AI.

  • What is the upgrade plan, what’s the takeaway, for how I get this long suite of, you know, Audrey Tang upgrades, so that I have an AI-resilient, defense-dominant digital democracy?

  • First of all, build connections to your local open source, civic tech communities. They already have all the tools. It’s just that they need a platform to connect them to the pre-bunking, to the counter information manipulation work, and so on.

  • Because in Taiwan, all the tools that we develop are developed in conjunction with international experts. We offer this digital gold card: anyone who has contributed for eight years or more to open source, to the internet commons, Wikipedia, and so on, can become Taiwanese for three years, and stay permanently if they like Taiwan.

  • We have, for example, Vitalik Buterin, who built one of the most resilient distributed ledgers in the world, working with us on security, resilience, and so on. So this is an international community. Again, you can find many experts in this particular domain at the Plurality Institute.

  • And really, just trust the citizens. The citizens have mostly already figured out the right values, the right directions to point the steering wheel, for AI and our technologies and our investment. It’s just that citizens have a very small bit rate, essentially just a few bits every four years, to voice their concerns.

  • So simply investing in increasing that bit rate, so the citizens can speak their minds and build bridges together, does wonders to make sure that your polity moves on from those isolated vacuums of online meaning, so that people do not get captured by those addictive, persuasive bots, but can instead move on to alignment assemblies, to jury-like duties, to deliberative polls, to crowdsourced fact-checking, and many, many other things.

  • The existing institutions are already there on the internet, so just talk to your local open-source civic tech community.

  • I’m just being a skeptic: how does that work in the United States? You know, I’m having trouble picturing that in America.

  • That’s a great question. Just 10 years ago, in 2014, not only was the administration enjoying a mere 9.2 percent approval rating, but there was great polarization and discord in Taiwan. The way we overcame it was to treat polarization as a symptom, not a cause. The symptom is the lack of bridges, of credibly neutral institutions, in the society.

  • So we, in the g0v movement, intentionally sought support from primary school teachers, from educators of all levels, from the national academy, which is beyond the control of parties or indeed the cabinet, from the open-source bulletin board system built by Taiwan University students, and so on and so forth.

  • So there must be some venues in your society that enjoy this non-partisan, all-partisan, credible neutrality. Build from there. And starting there, it could be at a very small, very local level, you then get the kind of space that promotes the bridging experience.

  • And once people have that peak experience of building a bridge together, they can afford then to build longer, bigger bridges.

  • That makes great sense. Should we wrap it up, guys, with one last goodbye question? What’s the goodbye question you’re thinking about?

  • I think you might have one of those. Well, there’s the fact that Taiwan plays this interesting role in the race to build AGI, through TSMC. So let’s see here.

  • And this gives us a chance to reinforce the cognitive labor point, which is always good to repeat. So maybe we’ll just do that. Audrey, this has been so incredible, and we’re so grateful to you for providing a model and inspiration and hope that democracies could have an upgrade plan by which they can be resilient in the face of AI.

  • And it’s that 90 percent of the chips that are used to train AI are built by one company, TSMC, which is one company in one country: Taiwan.

  • And so Taiwan is sort of printing the oil, the core resource you need to automate cognitive labor.

  • Or the spice, you can call it that.

  • You know, there’s this metaphor, a wonderful metaphor, that a barrel of oil automates about 25,000 hours of human physical labor: you can have 25,000 humans do physical labor for an hour each, or you can burn a single barrel of oil and get the same energy out of it, and you can use that energy to move things in the world.

  • Well, AI is that for cognitive labor. Instead of paying humans to think, to draw, to do the market research, to trade on the stock market, to illustrate something, to make a video, I can just take an AI chip, and it’ll do that instead.
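
For rough intuition on the 25,000 figure, assuming the standard ballpark numbers of about 1,700 kWh of chemical energy per barrel of crude and roughly 75 W of sustained human mechanical output (both my assumptions, not figures from the conversation):

$$
\frac{1{,}700\ \mathrm{kWh}}{75\ \mathrm{W}} = \frac{1{,}700{,}000\ \mathrm{Wh}}{75\ \mathrm{W}} \approx 22{,}700\ \mathrm{hours} \approx 25{,}000\ \text{labor-hours per barrel.}
$$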

  • And, you know, this is an interesting moment, with US-China tensions, and the CHIPS Act, and the US trying to prevent China from getting access to these chips, basically stopping the sale of the most advanced chips to China. And there’s an interesting parallel in history: before World War II, oil was the kind of thing that drove the world’s industrial economy.

  • When Japan occupied French Indochina in 1941, the US retaliated by freezing all Japanese assets in the States, preventing Japan from purchasing oil. And it was actually this, many historians think, that caused Japan to decide to attack Pearl Harbor, hoping that the US would negotiate peace, because it was essentially an embargo on their core ingredient, their ability to race.

  • So, I’m actually not sure where I’m going with this question, because we just think it’s a fascinating and super important irony, and I’m not teeing it up effectively…

  • I think what we’re getting at with this question is that Taiwan is at the fulcrum of this geopolitical rivalry, in terms of the situation with China, and also the fact that it manufactures 90% of the world’s most advanced chips.

  • And even more so given that the Taiwanese government supports TSMC more than, say, the US government would support any US company. So what’s the tension there? I guess what Tristan is getting at is that it’s a particularly unique situation for Taiwan to be in, geopolitically. What’s your read on that, and where does it go from here?

  • First of all, it is true that the Taiwan-produced chips power pretty much everything, from advanced military to scientific uses to artificial general intelligence, to everything, really. It’s one of the most general-purpose technologies imaginable. And because of that, I think people trust Taiwan.

  • People trust Taiwan’s supply chain to produce those chips; SEMI E187, for example, is the industry standard we established to protect TSMC and its supply chain against cyber attacks, and so on. And so we enjoy the trust of people around the world, and we take it very seriously.

  • We just established the AI Evaluation Center, and I think, among the world’s counterparts, we test the broadest range of potential AI risks. We test not just privacy, security, reliability, transparency, and explainability, which are standard, but also fairness, resilience against attacks, safety, especially societal evaluation, accuracy, and accountability.

  • And so by establishing the AIEC, I think we’re taking our burden quite seriously: yes, we did produce the chips that could potentially lead to the weaponization of artificial general intelligence, but we’re also taking it very seriously to make sure we invest more in the defensive side, the evaluation (and eventually certification) side, than in the offensive side.

  • After all, as I pointed out to Tristan when we last talked, at that time there were 30 capability researchers for every one safety and alignment researcher in the AI world, and we’re doing all we can to correct this imbalance by doubling down on investment in alignment and safety.

  • So, just to get this really clear, because a lot of people don’t believe that there will be artificial general intelligence: are you, and is Taiwan, working on the assumption that there will be, and that Taiwan therefore has a responsibility in how it develops?

  • Well, I mean, there are general intelligences. We’re talking to a few of them at this very moment, and they’re humans. And there is the kind of assistive or augmented intelligence that really helps us humans become a collective intelligence.

  • I’m talking not just about the noise-cancellation and real-time captioning AIs that are currently helping us communicate, but also about the bridge-making, sense-making AIs that are going to be assistants not to individuals, but to groups of people, communities of people, to find the common sense, the common purpose, the common meaning across communities.

  • And so whether you call this resulting intelligence an augmented collective intelligence, or whether you call it assistive general intelligence, I mean it doesn’t really matter, it’s really just splitting hairs. The idea is that machines should foster shared reality and human-to-human experience.

  • Machines should not replace humans in human-to-human relationships. But still, there’s a lot that AI can do to foster the human-to-human connections, so that we become a better augmented collective intelligence.

  • If you had a sense, if the Taiwanese government had a sense, that AI was going down a path that would be bad for humanity, would the Taiwanese government, would you, advocate withdrawing TSMC from the market to stop that happening?

  • The reason we set up the AIEC, and correspond closely with the US NIST AI Risk Management Framework and its task force, with the European counterparts and their Ethics Guidelines for Trustworthy AI, with the UK counterpart, the AI Safety Institute, and so on, is that we’re crossing a kind of frozen sheet of ice above a river, and we don’t quite yet know which places in that ice sheet are fragile.

  • We cannot know if people just race blindly toward power, toward capability, toward dominance; in the worst case, someone falls through, the ice sheet shatters, and everybody falls into the water, right? That’s the worst-case scenario.

  • And so what we’re advocating is a race to safety: a race to increase not the speed but the steering wheel’s capability, the horizon-scanning capabilities, the threat intelligence network, so that we can let people know when a small-scale disaster is just about to happen. And if it doesn’t happen in Taiwan (well, it didn’t happen in Taiwan for this election), maybe it happens in some other election in some other democratic country.

  • And as soon as a correlation, not necessarily causation, with artificial intelligence is found, we implement liability frameworks. For example, in Taiwan right now, and for some months already: if a deepfake scam asking people to invest in crypto or whatever is posted on a social media platform that earns money from advertisement, say Facebook, and Facebook, even after it knows that such a thing exists, does not de-platform it within 24 hours, then Facebook is not fined; but if somebody gets conned out of 1 million dollars, Facebook in Taiwan is now liable for that 1 million dollars.

  • So this is re-internalizing the negative externalities, making sure that the company which profits from this use of AI will also bear the harms if it does not check them.

  • We have implemented this in our law, not just against financial scams, but also against election meddling, against non-consensual intimate images, so-called deepfake porn, and so on and so forth.
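
The shape of that notice-and-liability rule can be sketched in a few lines (a hypothetical sketch of the rule as Audrey describes it, not the statute’s actual text; the names and numbers are illustrative):

```python
GRACE_PERIOD_HOURS = 24  # window after notice in which the platform must act

def platform_liability(hours_since_notice: float,
                       ad_removed: bool,
                       victim_loss: float) -> float:
    """Return how much of the victim's loss the platform bears.

    No fine is levied for the notice itself; liability attaches only when
    the platform profits from the scam ad and fails to de-platform it in time.
    """
    if ad_removed or hours_since_notice <= GRACE_PERIOD_HOURS:
        return 0.0  # acted in time: no liability
    return victim_loss  # failed to act: the platform bears the full loss

print(platform_liability(30.0, ad_removed=False, victim_loss=1_000_000))  # 1000000.0
print(platform_liability(10.0, ad_removed=True, victim_loss=1_000_000))   # 0.0
```

The design choice worth noting is that exposure scales with the victim’s loss rather than being a fixed fine, which is what re-internalizes the externality.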

  • And so I think we are cautiously optimistic about our horizon-scanning capabilities: for each harm that is discovered, we design a liability framework; if that doesn’t work, we design the countermeasures defensively; and only when that fails to work do we talk about more drastic measures. I hope that answers your question.

  • Well, it does and it doesn’t. I’ve been super impressed by those laws; it’s amazing. But those laws clearly don’t exist in any other democracy in the world. So the chips that Taiwan makes are still feeding into those harms, where they’re created in other countries that don’t have those laws.

  • So there is some burden of responsibility still on the manufacturer of those chips. I mean, even if you are controlling the harms in Taiwan, the rest of the world doesn’t have that capacity. So I guess there is some moral responsibility there, isn’t there?

  • I mean, these are great examples, and other democracies should adopt liability laws just like the ones you’re talking about. But let’s keep pushing this all the way to the extreme, where you’re saying: okay, we’re on this thin ice, it’s getting thinner, but no one knows exactly where the breaking point really is.

  • And we keep racing to go as fast as possible, with heavier and heavier venture capital, billions of dollars, to build a more powerful thing racing across the pond, because of the promise that if I do that, and I really unlock the Lord of the Rings ring at the end of that process, I get access to automating all science, you know, colonizing the galaxy, et cetera.

  • But there’s this point where we don’t know where the ice is going to break underneath our feet as we’re racing to get that golden ring. Is there some critical point, Audrey, where something else would need to happen, some other emergency brake, whether it’s TSMC shutting down the flow of chips in the world or something else? How do you think about that question? Because that’s not that many years away.

  • And this did happen, right? This did happen.

  • People saw very clearly, back when I was a child, that the ozone layer was being depleted by refrigerators, of all things, because freon, the chemical compound used in them, was rapidly depleting the protective ozone layer.

  • I think the point I’m making is: if we had been racing blind, if nobody had known back then that the ozone was being depleted, then yes, drastic measures would be called for when we suddenly discovered that we were all going to die from cancer, right?

  • But people did invest in the sensing capabilities, and in a commitment across the political spectrum to inventing a replacement, even though at the time they didn’t actually know what the replacement for freon would look like.

  • And through the Montreal Protocol, they basically set a sunset line, so that by this year and that year, we were committed to finding commercially viable replacements.

  • And after that point, after the Montreal Protocol, any investment into the bad old ways of manufacturing coolants was no longer cool, right? It was basically committing a crime against humanity.

  • And so, we need more Montreal Protocols against specific harms that AGI could bring.

  • And I totally agree with you that we need to continue our message, basically treating this as seriously as the pandemic, or the proliferation of nuclear arms, or even climate urgency. And only if we continue to do that can we create moral pressure on the top labs to commit to this kind of sensing and safety measures.

  • And in this, I think, you know, we deeply agree that we are currently racing towards a very dangerous, uncontrollable, dark outcome. And we agree that there needs to be some form of international cooperation, international agreements, and, you know, agreements between really any of the top labs that are racing to that outcome, so that we know how to manage it as we get close.

  • The difficulty, I think, is the ambiguity of where those lines are, the many different horizons of harm, and the range of different kinds of risks. Because some could argue that without the Audrey Tang upgrade plan for democracies, the existing AI that has already proliferated is enough to basically break nation-states and democracies. So there are already risks with the ability to break the very governments, and the legitimacy, that we would need for those international agreements.

  • So I think part of what I want to instill in this conversation, at least as my take, for listeners, is that we need to create a broader sense of prudence and caution, and correct that 30-to-1 imbalance: from 30 times more people racing to increase the power, to 30 times the investment in safety, coordination, care, and an understanding of those risks.

  • Your work is an embodiment of foregrounding, of making primary, the vulnerabilities and the fragility of society, so that we can care for them, and of focusing the incentives instead on the bridging, the health, the transparency, the strengthening aspects of society.

  • You’re a genuine hero, in that your work is not only an actual plan and possibility space for upgrading democracies, but also factors in the race for AI itself and what it will take to correct it. So my hope is that people will turn to this episode as a blueprint for what it could take to get to that place. And we should really do more conversations.

  • And just to bring us back to the very beginning of the conversation: when we talk about AI being used to find loopholes in law, the punchline of all of that is that law is the thing that can bind technology and make sure that AI has some kind of bounds. So if AI is used first to break law, then you lose the ability to bind AI, and we face a kind of checkmate.

  • The exact same thing is true of governments and institutions and their legitimacy: if we don’t follow an Audrey Tang-style blueprint quickly enough, then the very thing that would let nations coordinate, and have the legitimacy to slow the race to a safe speed, will fail before we can use it.

  • So I just want to echo what Tristan said: your work is a bright spot, perhaps the bright spot, in the world for the direction we can move. To use one of the phrases I love from you: what you do so well is that you don’t demonstrate against, you demonstrate with. You demonstrate by showing what the adjacent possible is. So thank you so much, Audrey.

  • Yes, every demonstration is a demo, and we’re happy to share what we have learned from our demonstrations in Taiwan.

  • A key lesson we have learned is that, in cybersecurity, taking control of a system and maintaining that control is, from an adversary’s viewpoint, not actually that easy. To deny its service, to overwhelm it, is much easier, and can be mounted without much risk of retaliation.

  • So, in the coming elections around the world, look for those overwhelming attacks: on your security, but also on your legal system; through information manipulation, of course, but also through polarization.

  • You’re all going to be overwhelmed, but you can counter that by pre-bunking and by working with the people, not just for the people.

  • Perfect point to end on. Thank you so much.