Well, technically all three parties won, but you got the primary seat there. So, yeah, do you want to walk us through how you… Yeah, go ahead.
What I would love for you to do is walk through some of the things that you’ve done, maybe through the vehicle of how you secured your last election, which just happened, and congratulations, your party won.
But you, Audrey, have done an upgrade plan: if you spend this many billions a year, you do all these upgrades across your information system, your cybersecurity, your trust security, your polarization security.
And so, it suddenly feels like every lock, every sort of security that we had in how our society worked, is coming off. And obviously, that’s the place I think we left our listeners when we did the AI Dilemma talk.
Because AI can generate automated lobbying, automated robocalls. I believe in New Hampshire there was a deepfake of President Biden telling voters not to vote. This just happened in the news recently.
So then, a lot of people listening to this, I think, can hear this as, oh my god, I suddenly feel like my society is way more vulnerable than I thought it was.
Maybe just quickly, before we get into the rest, can we define… what was the term you used that I don’t think people knew? Really quickly, what’s a denial of service attack?
Oh, yeah, sure. Maybe lay out the threat landscape as you see it: how are democracies vulnerable to the threats posed by AI?
Yeah, sure. That’d be great.
And the reason we wanted to have Audrey on is she’s the best living example, I think, of what it would take to upgrade from our 18th-century democracies to some kind of 21st-century democracy that is resilient to AI, that is no longer vulnerable. So, Audrey, we’re so ...
Well, what that vulnerability is for Windows, AI is for democracies, because democracies are suddenly super vulnerable to how AI can generate misinformation. It can find loopholes in law. It can generate new cyber exploits. It can do all these things that leave this democracy that we’re all living in ...
We think about, well, what is our democracy? What are our democracies like in the world today? And they’re kind of like this old software platform: imagine your computer is running Windows 95. Windows 95 was great for a long time. And then suddenly someone releases some new cyber hacking ...
Welcome to Your Undivided Attention. Aza and I are so excited to have with us today the digital minister of Taiwan, Audrey Tang. And the reason that we wanted to have Audrey on is when we think about: what will it take for AI to go well with humanity?
So I’ll say a more polished version of that. But when I just think about what we’re trying to accomplish together, it’s less that we’ll interview you, and more that we’re on the same side of the table, brainstorming: what is a blueprint for this ...
I wanted, Audrey, just to restate the intention that we had and why we were so excited to interview you here, which, in my mind, is that we’re recording a sort of one-hour blueprint for how you upgrade ...
OK, great. All right. Well, I’m going to do this semi-informally, semi-formally.
Audrey, good to see you. All right. Let me start my local recording. I am now recording. Good.
But yeah, if you’re willing to do another one of these in a little bit, we can space it out and really prepare for the questions that we want. We’ll record the transcript. We’ll review it and develop the questions and then it’d be really great to do that again.
Of course, there are different stages in that diagram that we drew. Like it’s an obstacle course and we’ve got to make it through first contact, second contact, and then when we get to recursive self-improvement, there’s a whole other set of questions.
Yeah, thank you, Audrey, we’re truly grateful for your insights. And I think there are actually some potential pathways here. It’s really, really inspiring. And Aza and I talked to a lot of people about endgames, and people do not have good ideas about how we get to ...
I know that we’re at time. This is a… Great.
And is that FIDO built into something, or is this…
I don’t. What is FIDO again? I’m sorry.
Do you have a vision for the actual way that you can see the world, in Western democracies, doing the zero-knowledge-proof identity thing? Like, again, which thing was it? Is it Worldcoin or the Orb? Is it some crypto thing that I don’t know about that ...
Passkeys, uh-huh.
For identity, right? And you’re for identity, just to make sure I’m not…
But then Elon bought Twitter with the express purpose of wanting to stop all the scammers and the bots. Do you think there is an obvious, easier set of extreme measures he could be taking, but he’s not, simply because… Like, if Twitter were liable for, you know, all the ...
Right, I mean, we have this meme that we came up with: that freedom of speech is not freedom of reach. I think we need to change the meme to connecting reach to liability, because it’s the volume and the scale and the amplification that drive up the responsibility. Reach ...
So, in there, the solution, according to what we’re seeing here, is: unless people label that they’re not real, that they’re not who they say they are, which of course they’re not gonna do, then you’d make Tinder or Bumble liable for-
I’m thinking, Aza, of the FTC report in 2021, that there was $500 million in romance scams, from basically Tinder, right?
Wait, say more about that. How would you actually… what form of liability would there be if they don’t race to safety?
So that’s, I think, where my heart is placed daily. So, I’m just going, yeah.
I don’t know. I guess I’m just trying to still get to, at the end of the day: I care about the world not turning into Mad Max or catastrophes or dystopias, which are the two outcomes. And I’m wondering… and that seems to be like the center of that ...
But then I just don’t feel good about where this is going, and I worry that there’s a very strong reason why I feel that way; we’ve articulated a bunch of it. Some of the things that we might be worried about, ...
Where do you want to go from here, Aza, with the 10 minutes that we have left? Because this is really inspiring to hear… these totally novel ways of potentially applying this in a way that is really deeply, deeply hopeful. I still wonder about, you know, the unease that ...
Yeah.
You’re not starting with individual agents. Like, you have an agent, I have an agent, Audrey has an agent; all of them have to model us before they can create a deliberation for the three of us. And so, it takes a while to build up to full deliberation. And Audrey’s ...
Yeah. And then you invite a smaller subset in to do the longer deliberations. And that thing becomes the grounding in the future. Because you’re just sampling; it’s like taking a blood sample, not draining the organism of all its blood. Super interesting. That is a much better distribution ...
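A minimal sketch of that "blood sample" idea, assuming a simple stratified random draw; the strata, shares, and the pick_panel helper here are all illustrative assumptions, not any system Audrey actually runs:

```python
import random

# Hypothetical strata and census shares; a real assembly would use
# actual demographic data for the population it wants to mirror.
POPULATION_SHARES = {
    "urban_young": 0.30,
    "urban_old": 0.25,
    "rural_young": 0.20,
    "rural_old": 0.25,
}

def pick_panel(people, panel_size, seed=42):
    """Stratified random sample: give each stratum seats in proportion
    to its population share, so a small panel mirrors the whole
    distribution (the blood sample) instead of polling everyone."""
    rng = random.Random(seed)
    panel = []
    for stratum, share in POPULATION_SHARES.items():
        pool = [p for p in people if p["stratum"] == stratum]
        seats = min(round(panel_size * share), len(pool))
        panel.extend(rng.sample(pool, seats))
    return panel
```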
Yes. Exactly. And so then you can, of course, have those agents, this meta-agent, talk to simulate larger-scale deliberations. And you don’t have to apply it just to what AI does. Now that you have, like, all right, we have a sample of what this ...
I mean, another way of saying it, sorry, I’m just slowly letting my brain catch up with what Audrey is saying: essentially what you’re doing is aligning an AI to the deliberation process of a specific set of people, which you can apply to any ...
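As a hedged sketch of what "aligning an AI to the deliberation process of a specific set of people" could look like in code: one persona agent per sampled participant, plus a meta-agent that bridges their reactions. The llm stub and the simulate_deliberation helper are stand-ins for whatever model calls you would actually use, not the real system being described:

```python
def llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a stub so the loop runs."""
    return "[model output for: " + prompt[:60] + "...]"

def simulate_deliberation(personas, question, rounds=2):
    """personas: position statements gathered from the real, smaller
    face-to-face panel. Each round, every persona agent reacts to the
    current draft, and a meta-agent rewrites the draft to bridge all
    of the reactions."""
    draft = question
    for _ in range(rounds):
        reactions = [
            llm(f"You hold these views:\n{p}\n\nReact to this draft:\n{draft}")
            for p in personas
        ]
        draft = llm(
            "Rewrite the draft as one statement all of these reactions "
            "could live with:\n" + "\n---\n".join(reactions)
        )
    return draft
```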
Yeah, this is fascinating. It’s like, yeah…
But wait, if I’m tracking correctly, this is really fascinating to me. You’re saying you do the deliberation: the online thing sets the agenda, the face-to-face thing then debates that agenda, they come to agreement and synthesis, and you find the bridging statements. ...
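To make "bridging statements" concrete, here is a minimal Polis-style sketch of the general technique (not the actual vTaiwan or Polis code): participants are clustered by their votes beforehand, and each statement is ranked by its lowest approval rate in any cluster, so only statements every camp can live with rise to the top:

```python
from collections import defaultdict

def bridging_scores(votes, clusters):
    """votes: {(person, statement): True for agree, False for disagree}.
    clusters: {person: cluster_id}, from a prior opinion-clustering step.
    A statement's bridging score is its *minimum* approval rate across
    clusters, so it only scores high if every group tends to agree."""
    by_cluster = defaultdict(lambda: defaultdict(list))
    for (person, statement), agrees in votes.items():
        by_cluster[statement][clusters[person]].append(agrees)
    return {
        statement: min(sum(vs) / len(vs) for vs in groups.values())
        for statement, groups in by_cluster.items()
    }

# Sorting by score, highest first, surfaces the bridging statements:
# ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```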
The things that you won’t know until after the fact, when the complexity is there: how does that actually work in the upgrade that you’re talking about? But the rest of it, I’d love to hear Audrey’s.
The challenge that I’ve always struggled with is: how do you know that a policy works? Because you’d have to wait 10 years, and we’re not gonna have 10 years for a lot of these things.
AlphaDeliberate, AlphaSynthesize and consensus: simulate the deliberations at a faster scale than they could have happened otherwise, and also learn from what worked.
And so, it’s great to be able to scale you, but I guess, sorry, I’m just kind of catching up because, just so you know, it’s been a long day. We started at like 8 or 9 a.m. So, we’re… I’m totally here for this, but like, I ...
I mean, so here’s a couple of examples. The 23-year-old influencer on Snapchat, this girl who made a girlfriend-as-a-service version of herself. So she made, excuse me, a digital avatar of herself where she basically sells access to herself as a ...
Someone would have to create an environment in which it’s easy to spin up such a set of things. And I don’t know, yeah, I don’t know. But it seems like there have come… I mean, the problem is that bots were already a problem. So, like, it comes ...
Yes, exactly. There’s obviously, like, fractal levels of this phenomenon. But the question with that one is, as I’ve always wondered: for that to actually be a threat that’s on the top-five list of major things worth being worried about…