
Repeating the category errors of some business-as-usual language, such as saying "human resources" or "incentivizing corporations," just propagates this category error in our thinking. So it's like trying to chart out a map with a very tilted lens: you can't perceive the world, right? When we see the internet of things, let's make it an internet of beings. When we see virtual reality, let's make it a shared reality. When we see machine learning, let's make it collaborative learning. When we see user experience, let's make it about human experience.

Today I am joined by Taiwan's Cyber Ambassador at Large, Audrey Tang, to discuss their work championing the integration of technology and transparency into government functions, with the goal of further empowering the voice of the people in policy decisions. Audrey Tang was Taiwan's first digital minister, serving from 2016 to 2024 and becoming the first head of the Ministry of Digital Affairs, where they were dedicated to promoting a radical level of government transparency, with the aim of making all government information, data, and resources as accessible to the public as possible.

Today, we discuss a few of their past successful projects as well as the philosophy of plurality, which guides all of their work in a global environment where the topics of tech and artificial intelligence can feel esoteric and out of reach for ordinary people. The projects that Audrey has introduced in Taiwan and beyond have resulted in real humans communicating and enacting effective policy changes.

Personally, I was naive on this topic, and I was blown away by what Audrey and their team have been able to accomplish, and I wonder what the world might look like if more communities, more countries, the whole world followed their lead. In my opinion, this episode highlights the more hopeful side of the great simplification, where technology could be used toward more pro-social, community-oriented, ecologically aware goals.

Additionally, if you are enjoying this podcast, I invite you to subscribe to our Substack newsletter, where you can read more about the systems science underpinning the human predicament, and where my team and I post special announcements, new written Franklys, and other such snippets related to the great simplification.

You can find the link to subscribe in the show description. With that, I am pleased to welcome Audrey Tang. Audrey Tang, welcome to the program. Hello. Good local time, everyone. So glad to be here. So you already have quite an amazing resume, with lots of successful movements and governance initiatives in your country of Taiwan, especially over the last 10 to 15 years.

You served as Taiwan's digital minister from 2016 to 2024, becoming the first Minister of Digital Affairs when the ministry was created, and now you are Taiwan's Cyber Ambassador at Large. But from what I understand, you had been studying and working in coding and digital innovation for quite a long time before that. And much of your journey into Taiwanese politics began with what was called the Sunflower Movement.

Maybe we could start there. Can you tell us a bit about what that movement was? What was your role and experience within it, and how did it affect your current worldview and work?

So back in 2014, Taiwanese society was deeply polarized. The president at the time was enjoying a 9% approval rating, which means that in a country of 24 million, anything the president said, 20 million people were not so happy with it.

And at the time, the parliament was trying to rush through a trade deal with Beijing, using this logic of, "oh, it's inevitable. The GDP will grow, we'll enter an acceleration phase. If we don't sign it, other people will sign and then we will lose out," and so on and so forth.

But there were people who thought deeply about the repercussions it would have, not just on our telecommunications system (for example, Huawei and ZTE would be able to enter and monitor our communications), but also the impact on the environment, on labor, and on many other things. And so in March of that year, people took matters into their own hands.

So we peacefully occupied the Parliament for three weeks. Now, a crucial difference is that we were not protesters who only demanded something or were against something — we were demonstrators who showed an alternative. And so we developed a lot of tools for the half a million people on the street and the many more online.

You could show up to a citizen-assembly-like conversation. You could enter your company number and very quickly see exactly how the trade deal would affect you. And then you could have a conversation with a dozen other people who were also interested in this matter, to think about ways to regulate future trade deals of this kind.

And so every day we read out, like a plenary, what was agreed that day. And then every day we pushed it a little bit further on the low-hanging fruit that was under debate. And so after three weeks, we managed to agree on a set of very coherent demands, and the Speaker of the Parliament basically said, okay, we'll adopt it, go home.

And so it was a very rare occupation that really converged instead of diverging. At the end of that year, I was tapped as a reverse mentor, a young advisor to the cabinet, so that for each and every incoming polarized topic, instead of fighting it out on social media, which isolates people into these antisocial corners, we could make something like the occupied parliament space we had built that year, without literally occupying the Parliament. And so I basically built many digital public squares to tackle things all the way from Uber in 2015, to countering the pandemic in 2020, to generative AI and so on in 2023 and 2024. And by 2020, the approval rating was already back to more than 70%, because we systematically discovered the uncommon ground that can pull people together despite their very polarized ideologies or political affiliations.

I have so many questions, Audrey. I'm glad you're here today. Let me set the context a little. We in the world today are realizing the problems with algorithms and social media: the polarization, the echo chambers, the inability to really have civic discourse about the things that matter, and the fact that we don't even know what's true.

I am not an expert on that other than I am an expert in knowing how important it is to solve these issues if we're going to have any hope of solving the larger issues that I discuss on this platform. So you just mentioned that instead of protest, you wanted to have alternatives.

And I'd like you to unpack that a little bit, because so much of our postmodern critique of the world is just pointing out what's wrong and what's bad, and it's just an anger sort of thing instead of actually being proactive. So can you describe why that's so important, and your experience with offering alternatives?

Yeah, definitely. So I'll use one recent example from a year ago, about March 2024. We saw a problem online with a lot of deepfake advertisements, fraudulent ads purporting to, you know, sell crypto or stocks and so on. In Taiwan it's almost always from Jensen Huang, you know, the Nvidia guy, the richest Taiwanese, and sometimes also from other entrepreneurs.

And if you click on Jensen's likeness, he actually talks to you, not just in chat but with voice, the whole deal. And that's because generative AI has grown to the point where it can run this kind of persuasion, what we call info attacks, with no human supervision. And so to solve that,

we sent SMS text messages to 200,000 random numbers in Taiwan from 111. That's the trusted number; people know it comes from the government. We asked just one simple question: how do you feel about information integrity online, and what should be done about it? And so people gave us their ideas, and then a thousand, two thousand or so people volunteered to have an online conversation.

In the end, we did not engage all of those thousands of people. We chose 450 people who were statistically representative of the Taiwanese population in terms of where they live, age bracket, gender, and so on. And so this microcosm, this mini-public, deliberated online for almost a day.
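To make the selection step concrete: a mini-public like this is typically drawn by stratified random sampling, so that the 450 seats mirror the population's mix of region, age bracket, and gender. Below is a minimal sketch of that idea; the quota dimensions come from the description above, but the data structures, field names, and top-up logic are illustrative assumptions, not the actual tooling used in Taiwan.

```python
import random
from collections import defaultdict

def stratified_sortition(volunteers, census_shares, target_size=450,
                         keys=("region", "age_bracket", "gender")):
    """Draw a mini-public whose demographic profile mirrors census shares.

    volunteers    : list of dicts, e.g. {"id": 17, "region": "Taipei", "age_bracket": "30-44", "gender": "F"}
    census_shares : dict mapping a (region, age_bracket, gender) tuple to its share of the population
    """
    # Group the volunteers by their demographic profile.
    pools = defaultdict(list)
    for person in volunteers:
        pools[tuple(person[k] for k in keys)].append(person)

    selected = []
    for profile, share in census_shares.items():
        quota = round(share * target_size)                 # seats this profile should fill
        pool = pools.get(profile, [])
        selected += random.sample(pool, min(quota, len(pool)))

    # Rounding (or thin pools) can leave a gap; top up at random from the remaining volunteers.
    chosen_ids = {id(p) for p in selected}
    leftover = [p for pool in pools.values() for p in pool if id(p) not in chosen_ids]
    random.shuffle(leftover)
    selected += leftover[: max(0, target_size - len(selected))]
    return selected[:target_size]
```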

And the way it works is that people enter what is like a Zoom call with nine other people. So 10 people in each room, and the 45 rooms deliberated about potential responses to this incoming issue of deepfake fraud. Maybe one room would say, okay, if Jensen did not sign off on that advertisement, it should be assumed to be a scam. We shouldn't assume it's human unless proven otherwise; we should assume scam unless proven human. Another room might say, if Facebook doesn't secure that signature and somebody gets scammed out of $5 million, then Facebook should be liable for that $5 million, because otherwise they would just pay the fine, which is negligible.

And another room might say, if Facebook doesn't even agree to this framework, we should slow down the connection to Facebook's servers so that the business goes to Google, and so on and so forth. And all these ideas are facilitated not by a human but by the room itself, with an AI facilitator that encourages the quiet people to speak up, makes real-time transcripts, and identifies what we call sensemaking, the uncommon ground between those rooms.

And then we read it back to everyone, and more than 85% of people, regardless of their party affiliation, agreed on the package of measures. Then we checked with the stakeholders, the big tech companies, in April, and they really could not lobby against it, because there is no pro-fraud party and we could show that everybody agreed on it.

And then finally, in May, we pushed out the draft. It's one of the very rare pieces of legislation in Taiwan where all three parties, none of which holds a majority, just fast-tracked it through. And so now, this year, if you open Facebook or YouTube in Taiwan, you just don't see fraudulent advertisements anymore. That's a solved problem. And that is because we could show that this was the sensemaking result of this broad listening exercise. So that was one anecdote, but you get the intuition.

That is pretty amazing. I actually didn't know that, but let me ask you some questions about it. So you said you started with 200,000, you got it down to several thousand, and then you chose 450 based on demographics, and then they were in 45 rooms of 10. And that itself kind of reflects Dunbar's number of sorts: you have to bring it down to a manageable human interaction level and then scale back upwards. So did each room of 10 come up with its own kind of verdict?

Yes. And then you compiled those 45 verdicts in a way —

That is exactly the case. And of the 45, 30 rooms were lay people and 15 rooms were practitioners, people who are actually media or social media professionals. And we made sure that this cross-pollination worked in the plenary. So people had one segment of conversation, and during the plenary we wove together those questions and suggestions and so on. We read them back with interpretation by experts, and then we entered the second segment, which then basically ratified the plenary conclusions.

The good thing about AI is that previously you would need a lot of people to individually read those comments in order to make sense of them. But now AI can do that without hallucinating, so you can get a pretty grounded report based on those 45 rooms' individual verdicts.

So what about someone who wasn't part of the 200,000? You said there are 20-some million people in Taiwan. When they see the results of this, wouldn't their initial reaction be: oh, this was just some AI scam that put this together, why should I believe what ended up being in legislation?

Yeah. Part of the reason is that we've been doing this for 10 years, starting from 2015 with the Uber consultation, where again we just asked people: how do you feel about someone with no professional driver's license driving to work, meeting a stranger on an app, and charging them for it? People have already had more than 100 of these, whether online petitions, online sortitions, or this kind of conversation.

So people can refer to those prior experiences, and they know they can force a response just by going to the national participation platform and getting 5,000 other people to countersign, so that for any regulation or any policy, if they're not happy with the draft we come up with, they can force another round of this exchange.

How scalable is this? Couldn't this be applied to almost any issue in the world, and technically, maybe not politically, but technically in any country in the world?

Yeah, I think the trigger point really is that you need a topic that is urgent enough and that, politically, is not the sole purview of an existing department.

If it already belongs to a single department, they tend to feel that they've already got a solution figured out; they do not actually need the collective intelligence. And if it's not urgent, then it does not warrant this kind of instant sensemaking technology — you can afford to do it over years and so on.

So just a couple of weeks ago in California, we launched Engaged California, and the first topic to be discussed is how to recover from the Eaton and Palisades wildfires. That is the kind of topic that has this urgency for clarity and is far from any single department's purview. And so I do think that for these kinds of topics — California is 40 million people — it's not a scale thing. It is the will of the people and the actual urgency for clarity, those two merging together, that create the opportunity to launch this sort of platform.

So there's the technology itself, what it does, but then kind of separate from that is people's trust in the technology. And you said that since you've been doing it for 10 years in Taiwan, there was a kind of social approval because people were used to it. What's the threshold beyond which people believe this? Like, could this happen in the United States now, on some issue that isn't existential but is interesting to people and relevant to their lives?

Yeah, I think so. It's also now ongoing in Bowling Green, Kentucky, with the Better Bowling Green consultation. And that's not urgent-urgent, but obviously people do feel there is some value in closing the loop: the conversations in the neighborhood, the mayor paying attention to them, and then using AI to figure out the uncommon ground despite the differences that people have in the society, and how those measures can really improve people's lives.

And closing the loop means telling the people who initially proposed those ideas: it is because of these words you wrote, you and of course the other 3,000 people, that this measure was taken.

Was there any evidence, within those 45 groups of 10 people or in any other recent example, that in the process of discussion and debate facilitated by AI, the 10 people themselves learned and changed their minds, or altered their position on the issue?

Yes, definitely. If you look up the Deliberative Democracy Lab at Stanford, which we partnered with for both Engaged California and this information integrity consultation, they have a lot of research.

And the most important takeaway for me is that this inoculation works in the long term. Not only do people entertain the other side's vision in a surprising-validator kind of way — I may not like your politics, but your suggestion makes sense to me — it actually influences their decisions even a year after such exposure to a citizen assembly, so that when they vote, they tend to look at the actual measures, the actual issues at hand, instead of just jumping into partisan politics.

And the people, the 10 people in each group, did they know that the facilitator was an AI and not a real human?

Yeah, because it's not an avatar or anything. You just see that the transcript appears as you speak. You just see a kind of little poke when you've been too quiet, and so on.

So it's not like an AI pretending to be a human facilitator. It's more like this room itself has a facilitating function.

So in addition to facilitating different priors and ideologies, it also equalizes in a different way. Because if you get 10 humans together, various power laws ensue and one or two or three of the people are gonna do 80 to 90% of the talking. But this actually upregulates the quiet and downregulates the chatty.

Yes, that is correct. And the reason is that we do want the voices that reach this uncommon ground to have some way of amplifying their reach. This is in stark contrast with the antisocial corner of social media, where only the most polarized, most extreme voices, the dunking, get amplification, because that is a broadcasting network.

It's not a conversation network. And so in weaving together a conversation network, we want to upregulate the kind of voice that resonates with the entire room. And to do that, you probably have to make sure that people take turns, listen as well as speak.
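A minimal sketch of the turn-balancing idea described here, assuming the facilitator tracks per-speaker talk time and compares it to an equal-time baseline; the thresholds and the "invite"/"yield" nudges are illustrative assumptions, not the actual facilitation system used in these rooms.

```python
from collections import defaultdict

class TurnBalancer:
    """Tracks speaking time in a small room and suggests gentle nudges."""

    def __init__(self, participants, quiet_ratio=0.5, dominant_ratio=2.0):
        self.seconds = defaultdict(float)            # speaking time per participant
        self.participants = list(participants)
        self.quiet_ratio = quiet_ratio               # below 50% of fair share -> invite to speak
        self.dominant_ratio = dominant_ratio         # above 200% of fair share -> ask to yield

    def record(self, speaker, seconds):
        self.seconds[speaker] += seconds

    def nudges(self):
        total = sum(self.seconds.values()) or 1.0
        fair_share = total / len(self.participants)  # equal-time baseline
        out = []
        for p in self.participants:
            spoken = self.seconds[p]
            if spoken < self.quiet_ratio * fair_share:
                out.append((p, "invite"))            # e.g. "we'd love to hear from you"
            elif spoken > self.dominant_ratio * fair_share:
                out.append((p, "yield"))             # e.g. "let's hear from others"
        return out

# Example: after 10 minutes of conversation in a room of four
room = TurnBalancer(["Ana", "Ben", "Chi", "Dee"])
for speaker, secs in [("Ana", 320), ("Ben", 200), ("Chi", 60), ("Dee", 20)]:
    room.record(speaker, secs)
print(room.nudges())   # [('Ana', 'yield'), ('Chi', 'invite'), ('Dee', 'invite')]
```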

It's really quite impressive, and I am not such a fan of AI to be blunt, but this is one of the good sides of AI. Yes.

I think that's because it's using AI as assistive intelligence. Just like assistive technology — you are wearing eyeglasses — it's not replacing a human in the human-to-human relationship. Rather, it is enhancing the human-to-human relationship. And this assistive use of AI also respects the dignity of the people in a conversation, so that they feel they can steer the conversation; it's not your eyeglasses steering the conversation. When we talk about AI, we often think of it in an automating fashion, as replacing a human in a human-to-human relationship or reducing humans to machines.

But assistive intelligence doesn't do that. It is task-only, and it is not trying to be some general superintelligence that dictates humans' logic. So it's not about aligning humanity to the digital AI logic; it's about individual digital tools, like eyeglasses, that can align to the human-to-human logic.


Can you give a brief account of what those two projects were, and specifically how they relate to a concept you describe as demonstrating rather than protesting?

Definitely. So g0v.tw, that's the domain name, was registered before the Sunflower Movement, in 2012, by some of my friends. I joined almost full-time in 2013. And the way we work is that we look at all the government services, like something.gov.tw, and if we don't like one, whether it's the budget or something else, instead of just protesting that it's bad, we actually make a better version at something.g0v.tw.

So I talked about the National Participation Platform, join.gov.tw; if you don't like that, you can change your "o" to a zero and go to join.g0v.tw, which is the g0v version. And because g0v is always free software and open culture, our products are forks — that's to say, alternate versions of the government versions — but we also relinquish a sufficient amount of copyright so that if the government wants to, it can always merge them back into the government service. So, quite famously, during the pandemic the g0v people developed an alternative way to do contact tracing that does not compromise privacy at all.

So instead of building a government version, the government simply said, okay, let's use the g0v version. And that resulted in Taiwan not locking down any cities during those three years, and it actually held until Omicron, which is no mean feat. And TSMC just kept running. Anyway, I digress.

So g0v tried many different things, including Polis. Polis was before generative AI, before language models for sensemaking; you can think of it as a visualization of where people stand on an issue. For Uber, for example, we asked people to chime in, and they go online and see their fellow citizens' feelings. For example, somebody may feel that undercutting existing meters is very bad, but that surge pricing during high demand is very good. So somebody posts that statement, and you can agree, you can disagree, or you can pass, but there is no reply button, so there is no room for trolls to grow. And so, in an asynchronous way, it simulates a little bit of the 10-person room dynamics by highlighting the most resonating ideas.

And so you see your avatar being sorted into one group, and this group has these kinds of agreements, but you also see, across all the different clusters, the different groups, which ideas are currently gaining ground that everybody, regardless of where they're coming from, agrees with. And so after three weeks in 2015, we agreed on a set of very coherent ideas about Uber, which we then passed into law so that local co-ops and so on could also operate.

And Uber has been a legal taxi fleet in Taiwan for quite some years now. So the idea is to use asynchronous contribution and discovery of the uncommon ground, so that even without language models to weave things together, people can still see the community notes that float to the top.

And the same algorithm has been adopted by YouTube, by Meta, and by X as the community notes algorithm.
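For the curious, here is a minimal sketch of the bridging idea behind Polis-style sensemaking: cluster participants by their vote patterns, then rank each statement by its worst agreement rate across clusters, so only statements that every cluster tends to agree with rise to the top. This is an illustration of the principle, not the actual Polis or Community Notes implementation, and the clustering choice (k-means via scikit-learn) is an assumption for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_aware_consensus(votes, n_groups=2):
    """Rank statements by how well they bridge opinion groups.

    votes : array of shape (participants, statements) with +1 agree, -1 disagree, 0 pass/unseen.
    Returns statement indices sorted so the most cross-group-agreeable come first.
    """
    # 1. Cluster participants by their voting pattern.
    groups = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(votes)

    # 2. For each statement, compute the agreement rate within each group, then
    #    score it by its *worst* group: a statement only ranks highly if every
    #    group tends to agree with it.
    scores = []
    for s in range(votes.shape[1]):
        per_group = []
        for g in range(n_groups):
            col = votes[groups == g, s]
            voted = col != 0
            rate = (col[voted] == 1).mean() if voted.any() else 0.0
            per_group.append(rate)
        scores.append(min(per_group))
    return np.argsort(scores)[::-1]

# Tiny illustration: two polarized statements and one bridging statement.
votes = np.array([
    [ 1, -1,  1],   # participants 0-2 lean one way on statements 0 and 1
    [ 1, -1,  1],
    [ 1, -1,  1],
    [-1,  1,  1],   # participants 3-5 lean the other way
    [-1,  1,  1],
    [-1,  1,  1],
])
print(group_aware_consensus(votes))   # statement 2 (agreed by both groups) ranks first
```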

Wow. So embedded in there is your emphasis on data about feelings, specifically the feelings of the citizens living under the laws and regulations that a government enacts. Why is it so important to incorporate those values into decision making?

And by the way, do you know Nora Bateson and her work in what's called Warm Data Labs?

Mm-hmm.

Yes, I've heard of her — I've not worked with her directly, but yes. Okay. But go ahead: what about data and feelings, the integration of the two?

Yeah. First of all, I think we are all experts on our own feelings. And so that is actually what can most easily resonate with our fellow citizens. Had we started our Uber consultation with "what's your ideal economic model for the sharing economy versus the extractive gig economy?" — probably nobody would have come, right? Because that is extremely abstract. But feeling is not abstract at all.

Feeling is very personal. And based on feeling, people then want to take care of each other's feelings. So you can see the Uber drivers, the taxi drivers, the passengers, the people worried about rural development and so on all centering around shared feelings. And naturally, when people start proposing ideas, the ideas that take care of everybody's feelings float to the top.

And so this speaks to a very different ethical foundation of policymaking. This is more about the ethics of care: that is to say, how much do we want to take care of each other, instead of asking which single abstract value, in a scholarly sense, we want to optimize. And care also has the benefit of being positive-sum.

So if I take care of your ideas, then you are probably going to propose an idea that also takes care of my feelings. Whereas if you put it to a referendum or something, as Uber did in other jurisdictions, maybe 51% of people feel they have won and 49% feel they have lost; their feelings are hurt, and they are therefore more likely to engage in negative-sum conversations from that point onward.

So what did those projects tell you about the divisiveness and polarization of the societies where they were enacted, and did people respond well to these technologies? Like, oh, this feels more positive sum and caring, and did they notice that?

Yeah, definitely. We can look at very objective numbers, especially for the very young people. In 2019, we changed our curriculum. Instead of the standardized answers, you know, that East Asian education is famous for, we switched to prioritizing civic competencies: namely autonomy, that is, curiosity; interaction with people who are unlike you; and also common ground, the ability to construct the common good.

And the idea here is that if we do not have this shared uncommon ground for young people, they will feel very detached from politics. They're just 14, 15. They have no way to contribute to agenda setting, even though they do know what is actually better for the planet and for people.

But by making sure that young people have agenda-setting power, for example by launching e-petitions or even becoming, as I mentioned, cabinet-level advisors and so on, Taiwanese 15-year-olds, according to the ICCS in 2022, are now top of the world when it comes to agency. They feel that they can affect society on people-and-planet issues, and they still maintain a number three to number five PISA score.

So people are also happy that the STEM results aren't degrading; it's not a trade-off. And I think because of young people's empowerment, as well as the depolarizing effect across religion, urban and rural areas, and age brackets, Taiwan was also the least polarized among OECD equivalents a couple of years ago.

That's amazing and important, because there are two issues. One is using this technology to actually change policy and regulations and things. But the other is that, irrespective of that, this technology suppresses apathy and provides agency, which is essential in our current world, because there are more and more people with mental illness, people just checking out because it's so much and because they don't feel they have agency over all the things that are going on.

So this technology could be really important just as a vector to increase the feeling of agency. Yes.

And it also has what we call a pre-bunking effect, because if there's already a polarized fight between two memes, then trying to arbitrate it, especially as the government, tends to just kindle the fire even more; people become even more polarized, and it fuels conspiracy theories and so on.

But this kind of technology allows us to discover the uncommon ground and share it as pre-bunking. So one very early example — pre-bunking, yes, not debunking. Debunking is when, after something goes viral, you say, oh, that's not quite the case. Pre-bunking is when, before something goes viral, you already say, by the way, this is actually how it is. Right?

So people feel that if they pre-bunk each other, they are less likely to be polarized. And there are many ways to pre-bunk; humor is one large part of it. So in early 2020, when people were not sure how the coronavirus interacts with masks, in Taiwan we already observed, as in other places, that one side said, because we had the SARS experience years ago, that only N95, the highest-grade masks, are useful and every other mask is actually a scam or something. And the other side said it's ventilation, it's aerosols, so wearing a mask hurts you and wearing an N95 hurts you the most, right? So if we had just let these two polarized memes grow, they would tend to fight each other and people would basically polarize into mask and anti-mask camps.

But the science was still not very clear then. So we very quickly pushed out a meme of an uncommon ground: a Shiba Inu, a very cute dog, putting her paw to her mouth and saying, wear a mask to remind each other to keep your dirty, unwashed hands away from your face. And that's an uncommon ground no matter which camp you are in.

You probably agree that hand washing is good. We actually measured tap water usage, and it increased. And because the dog is just so cute, if you laugh at it, the next time you see somebody wearing a mask or not wearing a mask, you just think about hand washing, which is not polarizing at all.

Everybody washes their hands. Just as there is no pro-fraud camp in Taiwan, there is no anti-hand-washing camp. And so it just defused the polarization into hand washing. There were also songs about it, and the cute dog dancing, and things like that.

So yeah, I'm literally soaking this all up, because I think it's so important. I take our current social media landscape as a given; I've stopped using Facebook, and I do use the other platforms to post the content of this website, but I've become really disenchanted with social media, and it's exciting to learn that these things are possible.


Yeah, certainly. So, the singularity means an AI that can improve itself, increasingly without human control.

At some point the AI can automate everything there is to automate about AI research. And then either it, I guess, grows a self-preservation instinct, refuses to develop the next generation of AI, and kind of sees us as a competing carbon-based species, or it doesn't get that and just recursively self-improves and serves not itself but maybe a CEO.

And then the CEO becomes transhuman and becomes a very different species from the rest of us, right? So that's the singularity.

Thank you for that. I've heard that word a lot, and that was the best description, as horrifying as it is.

It is kind of losing the race as humanity, right? It's not a race of ascension, as it is sometimes portrayed; for the rest of us, it's just that the human race loses. And so plurality says that instead of making an AI that grows more powerful by the day, recursively, we should actually enhance the way that people can work across differences. So design each piece of technology — it could be AI, it could be immersive reality, many technologies — with an eye on fostering the differences, but seeing the conflict that ensues not as a fire to be put out but as energy that can be harnessed for co-creation. And so any technology that enhances this collaboration across differences is in the direction of plurality.

So instead of a vertical race of takeoff, escape velocity — you see a lot of space-based metaphors — plurality is entirely horizontal. It is a lateral diffusion of technical capabilities, and each capability is steerable by the community that deploys it. And so the more we invest in plurality, the better prepared we are to face all the emerging harms being caused by advanced AI and so on. And the hope is that at some point people will discover that this is a better, more worthwhile direction. Maybe it's not worthwhile at all to replace our human race with some other silicon-based stuff, unless you're the CEO. And we have seen this: when people in Taiwan said to the CEOs of big tech, you need to be liable for whichever scam advertisements you run, because you've been earning advertising dollars from those scammers,

while the entire society is paying the consequences, the cost of those negative externalities, that pollution. This is the kind of plurality technology that quickly lets the decision makers rein in the CEOs. And so I do believe that this steerability comes from the bottom up, but it also needs endorsement from the regulators, to say, basically: okay, it's not my idea, it's like a trade negotiation, it is the people's idea.

So is that kind of — is plurality kind of like a decentralized singularity?

Well, it's an acceleration for decentralization, for democracy, and also for defense. Vitalik Buterin calls this d/acc: defensive, democratic, decentralized acceleration.

So it is a kind of acceleration, in that we want the most equitable possible diffusion. But it does not accelerate in the sense of self-improvement, like the vertical singularity does.

This could be applied in a lot of different areas. I'm specifically interested in how it could be used in the ongoing battle over what the future of social media could look like, especially given the aims of this podcast, your work, and that of a lot of our colleagues and people around the world hoping for a pro-social future. What would be the specific features of a social media platform rooted in the ideas of plurality, and how would it look different from the platforms we have today?

I'm sure you've thought about this and if not, are working on it.

Yes, certainly. I co-authored a paper called Prosocial Media that talks about this. The idea, very simply put, is that instead of your newsfeed being ranked by the engagement or addiction each post generates, it can be ranked by the various communities you belong to and how much coherence, how much uncommon ground, each post can generate between those communities.

Each of us has very different circles: spiritual, professional, family, and so on. And it's often the case that we ourselves are figuring out how to take something we cherish from one context across to another context. The idea is that there are creators on social media who specialize in creating these kinds of bridges, so that people can understand the other community more, and vice versa, just by viewing and engaging with such content.

And so for each post, you can then see, of the communities you belong to, which find it to be bridging and which find it to be debatable. So it's like the Polis interface, but applied to social media. We already have that in the form of community notes, but that is kind of a debunking thing.

You already have a trending, polarizing post, and then you can look at the community notes to get this kind of resonance and bridging. So the intuition is to move this into the main feed, so that the main feed itself becomes pro-social. In the paper we talk about this. For example, I'm involved in advising the Project Liberty Institute, which is working out a new economic model for TikTok if the People's Bid succeeds in buying TikTok US. Instead of advertisers bidding for the attention of each individual, kind of strip-mining the social fabric and making each person look at a wildly different feed, the idea is to recreate a common experience, so that people can know, oh, your community and that community are enjoying this together.

So, a little bit like those 10 people in the same room, people will be able to know that this resonates with their extended communities, and it creates kind of a Super Bowl effect. And we conjecture that communities as well as brands will pay for this kind of shared experience.
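A minimal sketch of the ranking idea described here: instead of ordering posts by total engagement, score each post by how evenly its positive reception spans the viewer's communities, so a post loved by one community and ignored by another ranks below one both communities appreciate. The scoring rule (minimum approval across communities) and the data shapes are illustrative assumptions, not the formulas in the Prosocial Media paper.

```python
def bridging_score(approval_by_community):
    """approval_by_community: dict community -> approval rate in [0, 1].
    A post scores highly only if *every* community responds well to it."""
    rates = list(approval_by_community.values())
    return min(rates) if rates else 0.0            # the bottleneck community decides the score

def rank_feed(posts, viewer_communities):
    """posts: list of dicts like {"id": "p1", "approval": {"gardeners": 0.9, "cyclists": 0.7}}."""
    def score(post):
        relevant = {c: post["approval"].get(c, 0.0) for c in viewer_communities}
        return bridging_score(relevant)
    return sorted(posts, key=score, reverse=True)

# Engagement-style ranking would put the viral-in-one-community post first;
# bridging-style ranking favors the post both communities appreciate.
posts = [
    {"id": "viral_in_one",  "approval": {"gardeners": 0.95, "cyclists": 0.05}},
    {"id": "bridging_post", "approval": {"gardeners": 0.70, "cyclists": 0.65}},
]
print([p["id"] for p in rank_feed(posts, ["gardeners", "cyclists"])])
# ['bridging_post', 'viral_in_one']
```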

So how prevalent are these various technologies, some of the projects you've mentioned in Taiwan? And is there any evidence that, on some set of issues, the Taiwanese population is less polarized than other countries?

Definitely. As I mentioned, across urban and rural, across age groups, across religion and so on, Taiwanese people are the least polarized. And we can also simply compare the pro-social ranking algorithm deployed on LinkedIn versus, say, on Facebook. LinkedIn curates its feed in a way that does not maximize the time you spend on advertising, but rather the cohesion, the coherence, that we just talked about.

And so the feed is quite different. When they first introduced the newsfeed to LinkedIn, they were very intentional: they curated these kinds of common-ground, bridging posts from business leaders, from people who are followed by a lot of people on LinkedIn.

And then they gradually opened up commenting and things like that. But the whole idea is to shape a norm where engaging with the feed actually adds to your sense of social cohesion instead of detracting from it, as Facebook has done since 2015.

So what are the barriers to scaling this pro-social, plurality-based social media? Why isn't it taking off more? This feels like something that people of all political ideologies and backgrounds would want.

Yeah, definitely. And it is true that I've been talking with many different people on different sides ideologically, and they all feel that it's time to move past peak polarization.

And I do think that what we need now is both strategies. One is working with the free software communities that run those smaller but still very respectably sized networks, such as Bluesky, with 30 million people, on one side, and Truth Social, which is also free software, on the other, to show that we can bridge the content so that people across Truth Social and Bluesky can find the uncommon ground, the surprising validators. That is what we're doing. The other is to take an existing network like TikTok and simply change its algorithm. And the idea of the People's Bid is that TikTok needs to interoperate.

Meaning that if you post on TikTok, you should be able to consume the same content and connect to the same friends on Bluesky, or on Truth Social, or anywhere else. And so people will then be able to curate their own experience instead of feeling locked into TikTok's core recommendation algorithm.

And so this gives us much more ground to experiment with pro-social ranking.

Just like everything else in our world, though, aren't our global economic system, our national economic systems, and our corporate incentives based on dollars, with clicks turned into dollars? So, you know, when we use social media, we get some benefit, and a lot of times it's dopamine-based rather than oxytocin-based, to make a generality, but it results in an economic gain for some individual or corporation. Does this combat that at all, or how does that play into this?

The hope here is that, just as LinkedIn has demonstrated, there is a way to pay for common experiences and oxytocin-based feelings, while still making sure that whatever advertisements, whatever messages you pay for, can result in something like a Super Bowl, which is the kind of pinnacle of common experience. And then you can build narratives and brands and so on in a way that individualized dopamine hits really cannot.

Seriously. I think our culture has a massive dopamine hangover. People may not know it, but we're so depleted. It's like we've all been on this Las Vegas junket and lost all our coins, and our brains are kind of fried, and we're hungry for serotonin and oxytocin, others of our ancestral neurotransmitters that we've been craving, which we get through community, community engagement, and social interactions. And the fact that we could possibly get that from social media is encouraging. Don't you think?

Yes. And there's a famous study from a year and a half ago about the average undergrad in the US using TikTok. If you ask them to move off TikTok while everybody else is still on that hamster wheel, you would have to pay them almost $60 a month, because they lose that much utility, like FOMO. But if there's a magic button they could press that would transplant everybody around them, and themselves, onto some other non-dopamine-based platform, then they're willing to pay you almost $30 a month.

And so it's obvious we're in a product market trap. Everybody loses utility on the hamster wheel, but the first one to move off suffers so much FOMO that nobody wants to be the first to move off.

Hmm. That's quite profound: dopamine is still worth twice as much as serotonin and oxytocin in our current economic system. But that might change. Yes, that might change.

So in your work, you're very deliberate in your projects and initiatives about the use of language and its importance. Why is language so important in these movements, and for civic engagement and participation in general?

Yeah, I think repeating the category errors of some business-as-usual language, such as, I don't know, saying "human resources" or "incentivizing corporations," just propagates this category error in our thinking. So it's like trying to chart out a map with a very tilted lens — you can't perceive the world, right?

That's what happens if you use that sort of category error. And so in 2016, when I first entered the cabinet as the digital minister, I made a word play, because in Taiwan, digital — shuwei — also means plural. So I'm not just a digital minister; I'm also the minister for plurality. Even though there was no ministry at the time — the ministry would come in 2022 — I still wrote a job description as the shuwei minister. It goes like this, very quickly: When we see the internet of things, let's make it an internet of beings. When we see virtual reality, let's make it a shared reality. When we see machine learning, let's make it collaborative learning. When we see user experience, let's make it about human experience.

And whenever we hear that the singularity is near, let's always remember the plurality is here.

Nice work, Audrey. Thank you. I do think language is so important. Like fossil fuels: they're not fossil fuels, they're fossil hydrocarbons; we're just choosing to use them as fuels. That's one example. Or we refer to "the United States consumer spent more this month," when we're human beings who buy food and other things. We're not necessarily consumers, except in the true ecological sense. But yeah, language is super important.

Yes, because we're marketing to each other. "Consumer of foods" is like referring to your "users"; it assumes this drug-like, subscription-like framing, right? So I think when I say user experience should instead be human experience, we're pointing out the same thing: that there is much more to being human than just consuming something or getting addicted to something.

So I've heard you describe liberal democracy as a sort of social technology that should be in constant innovation alongside other technologies. How would you describe the current state of innovation in democracy itself, and what is needed for it to keep pace with the other things going on in parallel in our world, like artificial intelligence and other disruptive technologies?

Yeah, that's a great question. I analyze democracy as a communication technology that has bandwidth and latency. Bandwidth is how much information each citizen can communicate to their communities and into decision making. So if you have a referendum, that's one bit of information. If you vote for a mayor among four plausible candidates, that's two bits of information.
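As a quick check on that arithmetic: the information in a ballot is at most log2 of the number of options, so a yes/no referendum carries log2(2) = 1 bit and a four-candidate race carries log2(4) = 2 bits. A tiny illustration, using only the two examples given above, for readers who want to verify:

```python
from math import log2

# Upper bound on information per ballot: log2(number of options).
for label, options in [("referendum (yes/no)", 2), ("mayor with 4 plausible candidates", 4)]:
    print(f"{label}: {log2(options):.0f} bit(s)")
# referendum (yes/no): 1 bit(s)
# mayor with 4 plausible candidates: 2 bit(s)
```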

The problem is that emerging technologies change our world in a way that demands solutions to what are called wicked problems, meaning issues that require coordinated action from many, many different parts of society. But if each part of society can only upload two or three bits of information, that is not sufficient to piece together a solution, a kind of jigsaw puzzle, to the wicked issue.

That is one part; the other part is latency. If you have to wait four years for the next mayor or the next referendum and so on — well, many incoming transformative threats can change society past the point of no return in less than four years. Think not just of the pandemic, but also of the infodemic, the polarization issue, the generative-AI-powered scams, phishing, and so on.

For all of these, you do not wait four years and start a new referendum or vote in a new mayor or things like that. You immediately get people together and very quickly get many more bits than just a vote. Maybe you get conversations, which are many more bits, or you get reflections on each other's posts and so on.

Like in Polis. Either way, you need to close the loop very quickly, so that people know that within weeks, or at most months, their idea results in steering the direction of the technology and the responses to it. And then people can come around again and again and learn that steerability. So I'm the cyber ambassador, and cybernetics comes from the Greek word for steering.

So this is about the art of steering.

I didn't know that. So is there a risk that if we don't continue to innovate on democracy as it is today, with all the liberties and freedoms that we've come to take for granted in our generations, democracy will simply become obsolete in the face of AI accelerating towards the singularity and the changing global political landscape?

How worried are you about that and how do those concepts interrelate?

I think there are various ways that people can see the incoming crisis, which is not just one but many. Some people say polycrisis, but they're all isomorphic, in the sense that if you've seen one crisis, you've also seen the shape of some of the other crises as well.

So it's like a meta-crisis. And I do feel that our experience applies, whether it is occupying the parliament peacefully and keeping it peaceful, or countering the algorithmic dispatch of Uber and of social media, the infodemic, the pandemic, generative AI harms, and so on.

Each of these examples shows that maybe a crisis, as in the Chinese word, is both a danger and an opportunity. And the shared danger is likely to make people see societal resilience not as a nice-to-have, but rather as something that people must contribute to. So the wildfire recovery topic on Engaged California is a great result of this infrastructure-level building.

And when such a topic comes in, people can pivot and respond to it very quickly. So I'm not pessimistic at all. I feel that each of those incoming threats actually accelerates the diffusion and the common knowledge among people that democracy, as a social technology, does need improvement.

Audrey, why are concepts like responsibility, liability, inclusivity, and transparency important for creating and maintaining an open democratic governance system of the type that you've been describing?

Yeah. I learned this when I entered the cabinet, because in 2016 I came in with some of that DOGE energy, you know: wanting to make everything transparent, wanting to make procurement like a leaderboard that people could compare, and things like that.

Shortening tax filing from three hours to three minutes through direct filing, and so on. We did all of that in 2016 and onward, but we very quickly found out that the people in the career public service, the career public servants, also had the same ideas, and that they are also great reformers.

They actually know how to do things better. It's just that they lacked air cover. There was no one to say: if you do this well, it's you who gets the credit, and if you do this and it doesn't work, I can take the blame. And so I made sure that we aligned our energy for democratic innovation with the language and the logic that the career public service uses, especially the planning and research and development departments. In Taiwan we have the National Development Council, and to them transparency and accountability have always been the norm. And if we add participation to that, inclusive participation at that, they want to know that this participation is accountable, so that we can channel it into new institutions, not just challenge and take down existing institutions.

So we announced our every move, everything like the Join platform, the participation office, and so on. Instead of just going ahead and doing it in code, we said, okay, 60 days from now we're going to do it, and here's a public commentary period. And we made sure there were no exceptions: everything had to be pre-announced publicly this way.

And so even though each of our moves took about 60 days longer, I think we won much more support from the career public service, because they could see that I was designing myself out, so to speak. If I'm no longer the minister, all those institutions, the new designs, are still around, because they conform to the logic of the bureaucracy.

I imagine there are many countries in the world that are very interested in copying your success in Taiwan, and others that are afraid of implementing some of these things. In your opinion, should countries be doing more to regulate social media platforms to be in line with these principles, and what are some of the benefits and risks of such government oversight? Any comments there?

So for this kind of broad listening and sensemaking, I think the smaller the polity, the easier it is to implement. To your point about Dunbar's number, pretty much any polity of just 150 people doesn't have to run a sortition; they just invite everybody to a conversation. And we do see that in many countries. In Japan, for example, there's a long tradition of citizen assemblies, but at a hyper-local level, literally the township level, and that has worked well.

Do we have the technology to do that at a township level now?

Yes, we do. It's the same technology. It's just easier to implement, and to get buy-in from the mayor of a town, as opposed to, say, a federal government. Right. So it's usually easier to start —

I want you to finish answering this question, but just so I understand: in the United States right now, people in Topeka, Kansas, or Red Wing, Minnesota or Sebastopol, California could access some existing technology right now to do some of the things you're talking about? What technology — yeah.

As I mentioned, the Bowling Green process is ongoing, right? So if you just search for Bowling Green, Kentucky, sensemaking, or Polis, or Better Bowling Green, you can see exactly how it's done. It's all open source: not just the platform but also the sensemaking tool. They're all free software, free for anyone to use. And there are some US states, like Oregon, that already have a citizen assembly tradition of the in-person kind. In that case it's not about convincing them to move online, but rather about using digital tools to augment the conversation and improve its reach. Democracy Next, for example, has been working with people in Oregon on that.

So in Bowling Green and Oregon, there are entities working on and chaperoning that process. But in theory, anyone listening to this show could look at the Bowling Green example, access the source code, and start something in their own community.

Yes, definitely. You can roll out Polis installations from pol.is, and for the sensemaking tools you can just search for Jigsaw sensemaking; Polis, I think, has now integrated that logic.

So it can also use language models to do very balanced reporting of people's ideas, and you can close the loop literally within a minute or so, for the mayor to maybe read every morning.

Let me ask you a related question, not to do with democracy per se. I've noticed over years, decades, of convening groups of high-status scientists and activists that everyone's got an opinion and they're very smart, and you get 80 or a hundred people together. But what ends up happening, when you're in person or when your name is attached to something, is that people, since we're social primates who compare ourselves and look at status metrics, defer to the most senior, wealthiest, most famous, or most influential person in the group.

And so they don't let their real thoughts be known. So I'm wondering, could the technology you just described for Bowling Green be used within an institution itself, where there are 200 people and you really want to know what people are thinking, without fear of saying the wrong thing and getting demoted or anything?

Is — would this apply to those situations as well?

Yes. And there are technologies for the in-person kind, like Cortico (C-O-R-T-I-C-O), developed out of MIT. With this tool, you can just put your phone or a round microphone on the table, and it ensures that the facilitator is guided not just by the conversation guide (the turn taking, not letting a single senior person dominate the conversation), but can also carry other conversations from previous talks into this particular conversation pod, so that the conversation network can cross-pollinate.

So when the most senior person says something, the facilitator can press a key, and a message plays from some other conversation that counterbalances the point that was just made.

Why didn't I know about this? And what is holding this sort of technology back? Is it awareness, as in my case, or is it money, or is it that big tech is afraid of these things, or is it social organization?

Why aren't these things scaling more rapidly?

I think one of the main reasons is that all these things run on oxytocin and serotonin, right? And so it is a vibe thing. Once you're in this vibe, it's more likely that you will participate in one of those conversations, and you will discover a very large rise in this kind of conversation network.

But if you're dopamine bound, it's very difficult.

Yes. So actually we need to heal people's dopamine addictions concurrently, so that they move into this more zen, holistic human experience. And obviously this is the type of social media that I would prefer, rather than clicks and likes and the unexpected reward of some goat that claps and falls down while a snake crawls under it.

And woo, I never saw that before. Which doesn't really give us much meaning or depth or purpose to our lives. Anyways.

Oh yeah, definitely. On my phone I have turned on the color filter. You can go to settings and choose color filters, so it's almost entirely grayscale, with just a little hint of color, so that the phone is never more vivid than reality. And it works wonders.

So I cannot get pulled into the dopamine because this Las Vegas thing, this slot machine, simply does not give high enough rewards when your phone is grayscale.

Oh, that's a great idea. I'm gonna do that starting today. It's called color filter. I'm gonna do that.

So, moving on to a more serious topic (not that the things we've been discussing aren't serious): how might the events we're seeing right now, especially in the United States, playing out with big tech and tech oligarchs, damage people's inherent trust in technology in a way that might limit some of the opportunities you've been describing? What do you think about that?

Yeah. On one side, we do see that people collectively feel it's time to move past peak polarization. On the other hand, aside from more people using, say, Bluesky or Truth Social or Signal or Proton and things like that, there has yet to be a very coherent movement out of the big-tech-dominated social media landscape toward a more pluralistic, pro-social one. That is true. So this is partly what we are trying to achieve with this paper and by advising the Project Liberty Institute on the TikTok bid. But regardless of whether TikTok becomes a prosocial space, I do think there are pockets of good within those big tech companies.

The Bowling Green experiment, for example, is done with the Jigsaw group within Google. That is the group within Google that tries to work in a prosocial way to counter the antisocial damage that the algorithm of, say, YouTube has done to society. As far as I understand, the Community Notes team within Meta is doing a similar job.

So it's not all black and white, so to speak. Everyone looks at these big tech companies as monoliths, but what we're doing is also building a network among the people who kind of act as the conscience within those companies, so that we can band together and build a horizontal social network.

I've heard you, in a conversation with our mutual friend Tristan Harris, who introduced us, use the phrase "the most careful should win the prize," in reference to how our current systems incentivize people and companies with dopamine and dollars, et cetera. Can you unpack what you mean by that statement, and how is your work creating mechanisms to incentivize care?

Yeah, definitely. I would say it's not just incentivizing care, it is also assisting and augmenting care, because care work is very energy- and time-consuming. A facilitator realistically cannot facilitate 450 people at once, even if they really care a lot; there are wetware limitations to the amount of care you can put into a conversation as a facilitator. So think of personal care: sometimes, if you need to move people who are heavy and so on, you can use an exoskeleton that does not automate away your work but allows you to lift heavier weights. You can also think of Cortico and similar conversation-network plurality technologies as an exo-cortex that helps somebody who performs care work, like facilitation, to make sense of more people or to close the loop slightly faster, but it is not replacing the care workers.

To replace them would be like sending my avatar to talk to your avatar, having AI summarize all the avatars, and having an avatar be the mayor. It's like going to the gym and watching a robot lift the weights — I'm sure it's very impressive, but it does not help our civic muscles. So this care work pairs with the idea of assistive intelligence, in that it treats people-to-people promises, people-to-people attention, as the most important, most cherished thing, and technology is just there to foster it.

So this is very eye-opening and exciting, and we've approached what I call a species-level conversation, almost a rite of passage for our species at large. And there are lots of countries in the world. Do you ever think there's something unique about Taiwan, its population, and its culture that made it a more viable place for these strategies and movements to take hold?

Or is it applicable anywhere?

I think it's applicable anywhere. I think Taiwan simply has to innovate along these domains, because all our people, at least people above 40 years old, including myself, remember martial law, and we suffered the longest martial law period in the world, multiple decades.

And so we know what it is like to have our freedoms of expression, of assembly, of movement, and so on taken away. And nobody wants to go back there. So when we face such civilization-scale, existential threats, we have no choice but to double down on freedom, because we cannot suffer even a little bit of democratic backsliding; the people simply would not put up with it.

And so whichever solution we come up with needs to be with the people, not just for the people. People in Taiwan do not accept this authoritarian "for the people" rhetoric. But that is just the necessity that pushed us to come up with these ideas. To apply these ideas, you do not need the same configuration as Taiwan, and you do not need the same existential opportunity of facing every day as potentially the last day of democracy, and so on.

As we have since 1996, when we first voted for our president and our not-so-friendly neighbor started missile tests. So yes, while it originates in Taiwan, it can work everywhere, not just in Finland or Tokyo, California or Bowling Green or Oregon and so on. It can also just be in your family, in your school, and in your local community.

So before becoming the Minister of Digital Affairs in Taiwan, you were a very engaged youth activist. And as I understand it, you were also a reverse mentor in the Taiwanese parliament. Yes. Which is a role for people under 35 to advise older officials. So, in your opinion, what is the role of young people today in governance and in participatory democracy?

And what lessons do you take away from having now been on both sides of the reverse mentorship in Taiwan?

I believe in intergenerational solidarity, where the young people set the direction and the senior people provide the support and resources. On the Taiwanese participation platform, the most active age groups are the 17-year-olds and the 70-year-olds.

Both have more time on their hands, I suppose, but also both care more about the oxytocin and serotonin thing of sustainability rather than the dopamine thing of the next quarter, right? So the idea is not to arbitrarily put them kind of against each other, but rather to find the common topics where the younger people see a new possibility.

But the more senior people have the wisdom to see how that can be made possible, like the adjacent possible, how adjacent that possibility really is. And so through reverse mentoring and through this kind of intergenerational solidarity design, we incentivize the local social entrepreneurs and so on to form the kind of leadership team that has different generations on its board, basically.

And this, I think, is a great way to heal one of the most divisive things currently in our society, which is that the senior people with the resources think the society should go one way, while the young people already have proof that the society cannot sustain that way.

Do you have any specific recommendations, Audrey, on how the listeners and viewers of this program can create a better relationship with technology, as average citizens who want to be informed and engaged with their governments and institutions? What advice do you have for the viewers to better use technology?

On a personal level, the color filter, setting your phone's screen to grayscale, is really great. I've also seen people using a stylus or a keyboard or really anything that is not a touch screen, and that also works great. Either of the two can probably switch you off the dopamine.

So it's creating a dopamine speed bump of sorts.

Exactly, yes. So making sure that the slot machine doesn't immediately respond to you, to increase the latency and reduce the bandwidth, so to speak.

So yes, it works very reliably for me and hopefully for you as well. On the community level, you can encourage each other to try more in-person gatherings or synchronous online gatherings, and learn about active listening and facilitation. The facilitation schools that I use are Dynamic Facilitation and the Focused Conversation Method.

But you don't need to go into any particular school. Even in a meeting, if you say, okay, now let's speak clockwise, and now let's speak counterclockwise, that can already break the habit of deferring to the most senior, highest-status person. So that's the easiest facilitation method that can be transmitted on a live show.

But there are a lot of facilitation methods, so learn about them, and also get into the community of Open Space Technology and other ways to scale these conversations and facilitation upward, so that you can scale not just horizontally, but also deeply.

So you said there's lots of different methods. Where would someone go to learn about those methods?

Yeah, you can search for facilitation techniques, or group facilitation, and you will see pretty much everything there is. Or you can reach out to your local facilitation groups and join some facilitated conversations yourself.

So this has been just an amazing discussion, because I hadn't realized the importance of this topic, and I'm not even a novice in it, so I've learned quite a bit. If you could take your open-society, software, plurality hat off, and speak just as a citizen of the world today, facing the polycrisis and what I refer to as the human predicament, what sort of advice do you have for people alive at this time, aware of the issues and challenges that we face? Just human to human.

Yeah, I think a shared sense of urgency, whether it's ecological or social or anything in between, helps people build solidarity, build this kind of care. That makes it far easier for us to say, yeah, this is too much for just a single person, I need your help, and vice versa.

And then we can keep asking each other, okay, so what's your feeling right now around these issues? And we can help each other by facilitating conversations and uncovering uncommon ground, so that through active listening you can entertain listening to people who are very much unlike you.

Maybe they come from a very different background, a very different ideology, but if you can just listen for five minutes without interrupting them, even in your head, and then repeat back what you have heard with clarifying questions and with curiosity, and then the other person takes a turn, and so on. Such simple practices, literally facilitation with just two people, can really get us out of this dopamine loop.

And the topic to explore together, again, is this shared urgency, this crisis feeling that I'm sure all of us have at least some time during the day.

With the possible exception of maybe Daniel Schmachtenberger, I don't know if I've ever listened to someone for five minutes without interrupting them.

So I think it's good advice. What about young people? I know you care deeply about young humans because you were quite active in your younger years. What specific recommendations do you have for young humans in my country, in your country, around the world listening to this, who become aware of our economic, social, ecological problems?

Yeah. So certainly get organized. The young people of today know a lot about horizontal organization, about discovering a shared purpose and how that shared purpose can bring people together. And if you are organized, then, just like the Taiwanese 15-year-olds, you feel you are already an adult.

You feel that you can already contribute meaningfully to the agenda-setting of the society. Taiwanese people, even before they turned 18, have started some of the most impactful petitions, not just changing recycling or plastic straw policy and things like that in the ecological sense, but also changing their school schedule.

So they go to school one hour later, because they proved that one more hour of sleep is better for grades than one more hour of study, and the Ministry of Education just accepted that. Or even funding the menstruation museum in Taiwan, which broke that taboo across the whole society in just two or three years, and so on and so forth. Some of these contributions earned their authors cabinet-level advisor or reverse-mentor status, but even without a status, just organizing yourselves enables you to have these kinds of conversations at a societal scale. And again, organization starts by listening toward a shared purpose.

And I recommend People, Power, Change by Marshall Ganz on how to get organized.

So I have a couple of closing questions that I ask all my guests. I hope you don't mind. I know it's approaching midnight where you are. What do you care most about in the world, Audrey?

I care the most about our ability to care.

Thank you. If you could wave a magic wand, what is one thing you would do to improve human and planetary futures?

I would make sure that anytime people speak in utilitarian logic, they automatically bring in some care or virtue or spiritual intuition, really whichever tradition. So a little bit of infusion, or inception, of a different ethics into the current utilitarian logic.

And that, as we have been observing, is what we've been doing for the past hour and a half.

Yes. So what are you working on now, and what are you most enthusiastic about that you can share?

Yeah, so I'm going to South by Southwest a couple of days from now, and my short biopic, Good Enough Ancestor, will premiere online.

Good enough ancestor. I love that.

Yes. And potentially also working on the film adaptation. But yeah, I encourage you to check out Good Enough Ancestor, gou hao, as we say in Mandarin, because if we were perfect, we would actually rob the future of its creativity and its canvas. But if we're just good enough, then we can make peace with future generations.

I love it. If you were to come back on this show sometime in the future, 6, 9, 12 months from now, what is one topic that is relevant to our future that you are personally passionate about that you would like to take a deep dive on?

So we talked about this idea of a vertical takeoff singularity when it comes to AI, and we also talked about this horizontal care-based diffusion of capabilities of plurality.

So a deep dive into how these two directions work with each other and against each other, the dynamic between those two approaches. I think we could do a deep dive on that.

Awesome. This has been great. Audrey, do you have any closing words for our viewers today?

Yeah, definitely. So I often quote from my favorite singer-songwriter Leonard Cohen, on the importance of being just good enough but not perfect.

Because if you're perfect, there's no way to say I need help, and no way for others to express care. So to quote Leonard Cohen, my favorite stanza from Anthem goes like this: Ring the bells that still can ring. Forget your perfect offering. There's a crack, a crack in everything, and that's how the light gets in.

Thank you for your time today and for your very important work and to be continued, my friend. Thank you. Take care. Take good care. If you enjoyed or learned from this episode of the Great Simplification, please follow us on your favorite podcast platform. You can also visit thegreatsimplification.com for references and show notes from today's conversation.

And to connect with fellow listeners of this podcast, check out our Discord channel. This show is hosted by me, Nate Hagens, edited by No Troublemakers Media, and produced by Misty Stinnett, Leslie Balu, Brady Hayan, and Lizzie Sir.