• I want to welcome our first guest of the evening to Loomer Unleashed, Audrey Tang, who I believe is joining us from Taiwan or perhaps still Europe, but I know it’s somewhere overseas. So, Audrey, thank you so much for joining me on my program tonight. How are you doing?

  • Hello. I’m in Taiwan. It’s half past ten, and I’m really happy to share with you some thoughts around the freedom of speech. We, of course, support robust and uncensored discourse from Taiwan because we suffered from martial law, and we are also, of course, resisting authoritarian censorship.

  • Absolutely. So you are Taiwan’s cyber ambassador, and you are also Taiwan’s first ever digital affairs minister, a post you held from August 2022 to May 2024. I think it’s an interesting concept because only now, here in the United States, do we have somebody who is fulfilling a similar role. I guess people would describe you as the tech support for Taiwan. And now we see this same kind of position being filled by Elon Musk in what is, you know, a self-proclaimed position of White House Tech Support or National Tech Support. Can you explain to the viewers what you do in your position as Taiwan’s Cyber Ambassador and what your position as the digital affairs minister involves? Because I think that’s a unique position to Taiwan that we don’t really see in other places around the world.

  • Certainly. So I joined the cabinet as a special adviser in 2014, actually. That was more than a decade ago, and it followed an event in Taiwan’s parliament at the time. Many of us who were unhappy with a trade deal with Beijing—one that would have invited them into our media, into our communication network, and so on—took the matter into our own hands. We peacefully occupied the parliament for three weeks, calling for reform. We called ourselves “demonstrators,” not just “protesters,” because we wanted to show that it is possible—transparently and without censorship—to have an all-of-society conversation around these matters.

  • At the end of that year, I was invited to basically play that role but online, building the systems where people can have unfettered access and conversations without getting censored by social media—Big Tech at the time—so that we could determine policies together. We not only resolved issues around Uber, workers’ rights, and so on, but we also tackled disinformation attacks and polarization attacks coming from our authoritarian neighbor. We shortened the tax filing process from three hours to three minutes in 2016–2017, again tapping into the wisdom of the crowd. Anyone could contribute ideas, and we also published transparently the government’s spending and procurement so that people could comment freely on it.

  • Very interesting. And, of course, that’s a position that I wish we would have had here in the United States years ago, because we’ve seen now, finally, there seems to be a focus on free speech—especially now that we’re seeing the political vindication of President Trump with his reelection after we witnessed the shocking censorship during a presidential election in 2020, and the shocking act of deplatforming a sitting United States president by the Big Tech social media companies, of course, in 2021.

  • I’m glad to see that there’s now a renewed focus on free speech and the creation of platforms, but I still think there are challenges—not just in Taiwan but in other parts of the world as well. We saw this discussed at length, and it became a point of controversy during the Munich Security Conference, given the fact that J.D. Vance put most of the European attendees on full blast on the international stage there, condemning them by name for individual acts of censorship and authoritarianism. I know that one of the things you’re most focused on—and you recently released a paper on this, and you have a film coming out about this as well—is how these social media platforms can help support communities through open-source technology as a way to combat not only censorship but also authoritarianism.

  • So, given that you not only attended the Munich Security Conference but you also spoke on behalf of Taiwan, what were your thoughts on J.D. Vance’s speech? What was the reaction, as an attendee? How did you feel in your position as the cyber ambassador? And what impact do you think it had on the global stage as it relates to free speech and anti-censorship policies?

  • I spoke at the Munich Cyber Security Conference, which started one day before the main Security Conference. But I did, of course, like everybody else, listen to your vice president’s speech.

  • I think in Taiwan, because we’re the freest in all of Asia in terms of internet freedom—including freedom of expression and so on—we do not believe there is a trade-off to be made between security on one side and freedom on the other. In many jurisdictions, they believe that in order to have safety or security, you have to give in a little bit on freedom, like censoring some speech in the name of safety and security.

  • But in Taiwan, we have found that if you make sure the communities themselves do the moderation work instead of delegating it to experts or some third-party people controlled by Big Tech, it results in much more robust resilience. Instead of not seeing the attack or polarization, people learn to identify these as robots, trolls, and things like that, but you don’t need to censor them. People can have a shared understanding of what is actually going on and then contribute ideas and creative solutions—turning conflicts not into fires to be put out but rather co-creative sources of energy. That is our position, and it’s the position I shared at the cybersecurity conference.

  • To your other question about the role social media plays today: I’m advising the Project Liberty Institute, and we’re redesigning ideas around communities online so that we can build upon what’s called open source or free software, decentralized systems. If you’re not happy with any particular policy that a social media platform runs, you can keep your content, keep your relationships, keep your social graph, and just migrate to some other community, where you can then have your own conversation and moderation rules. We’re also advising the team about “The People’s Bid,” trying to buy TikTok with Frank McCourt and Kevin O’Leary leading the bid, to turn TikTok into something like this—where moderation rules aren’t determined by a far-off entity but rather by individual communities.

  • So how would your system differ from a system like Facebook’s fact-checkers or what is now described as the X Community Notes system? Some people have argued—and we’ve seen this firsthand—how these systems, while they may have come with good intentions, have also been weaponized. We’ve seen the way Facebook (or Meta) has created partisan advisory boards full of mostly leftist activists, for example, that weaponized the fact-checking system and used it to impose fake science or revisionist history.

  • Some people have also made the same argument about the Community Notes system on X, arguing that there’s only a select few people who can actually participate in it, meaning it’s not as open-source or as decentralized or as open to the public as it may portray itself to be. So how would you say that your system of doing things is different from Meta’s fact-checkers or X’s Community Notes?

  • In two very important ways. One is that we are replacing top-down decisions with a community-based, bottom-up approach. Instead of a secret panel—where you really do not know who they are—deciding whom to silence, the idea is that each community can decide what content they value. So this is the first thing: decentralizing the decision-making, not just the technical apparatus.

  • The second thing is that it is based on more speech rather than less speech. In our system, you simply see more labels—what ideas are gaining common ground in your community and what ideas remain divisive, and how they are debated. But these labels are not taking anything down; they just provide more context about who supports it and why. That’s the opposite of censorship. We’re not taking anything down; we’re adding in valuable context.

  • And so, given what you just said, what was your own reaction to J.D. Vance’s speech? Did you find yourself agreeing with most of what he said, or did you find yourself disagreeing with elements of it? Or do you agree that we’ve seen a degradation—an assault—on free speech and freedom of expression in Europe?

  • Well, the Romania case that he shared with the audience is also a case I often bring up, which is the need to strengthen democracy so it is not easily overtaken or overwhelmed by bots or polarization attacks from authoritarian regimes. On that message, I think we’re very much in sync.

  • Many of the European audience members are basically trying to come up with what we have already discovered in Taiwan, which is a more robust, more open, and transparent way of making sure people understand what’s going on. Additional context is actually better at defending against such polarization and infiltration or troll attacks than misclassifying some humans as trolls and therefore silencing their speech, because in doing so you weaken the social fabric—the strength that comes from different people holding different opinions.

  • I think J.D. Vance’s speech outlines a direction. The technological idea here is that we need to ensure that communities have moderation in their own hands, instead of just making sure one single state actor or one single Big Tech actor does that. As long as it’s centralized in one decision-maker, this false trade-off will inevitably happen.

  • You mentioned the Sunflower Movement in Taiwan, and I wanted to get your take on this because what we saw here in the United States was a crackdown on free speech and a movement to increase censorship. Here in the United States, when we had our own January 6 protest at the United States Capitol—the left would argue that it was violent; the right would say that it was a peaceful protest, given the fact that nobody used any weapons, and they were just Trump supporters occupying what we call the “people’s house,” right, the U.S. Capitol, which is paid for by U.S. taxpayers. But that was the catalyst, the defining moment that really resulted in the deplatforming of Donald Trump and the creation of new pro-censorship policies by the Big Tech social media companies here in America, to not only silence world leaders but also silence half of the country and really crack down on political speech.

  • Did you see similar things happen when you had the occupation of your own government buildings in Taiwan during the Sunflower Movement? Because it’s interesting to see how, around the world, there have been protest movements where citizens have occupied political buildings, and yet the reactions have been so drastically different. I mean, here in the United States, you had politicians like Kamala Harris actually saying that January 6 was worse than Pearl Harbor or worse than 9/11—when thousands of people lost their lives in acts of terrorism. So how would you compare and contrast something like Taiwan’s Sunflower Movement to the January 6 protest at the United States Capitol?

  • Well, back in 2014, I believe we were even more divided and angry. The approval rating at that time was 9% for the president. In a country of 24 million people, that meant more than 20 million people disagreed with anything the president said. One contrast I would draw is that we said from the beginning we were not just demanding something or protesting something; rather, we were moving from the demand side to the supply side—the tech support side—by demonstrating that it is possible for people to come up with better trade-deal policy ideas instead of just the one that was being rammed through. People could come up with better thoughts on how to protect our information ecosystem from invasions by communist authoritarian ideas, and so on.

  • By very quickly positioning ourselves as demonstrators, not just protesters, that is how the energy was different and resulted in the Sunflower Movement being one of the very rare “Occupy” actions that concluded with the Speaker of the Parliament basically saying, “Okay, the people’s ideas do have a point. Let’s accept their ideas.” That occupation converged instead of diverged over time.

  • You have a paper that you published—Understanding Over Engagement: Designing Platforms to Bring Us Together—where you talk about how we need to be focused on creating these platforms and supporting the creation of social media platforms that support communities, build up communities and freedom of speech, and also combat authoritarianism. Do you feel that, now more than ever, the rise of technology poses an increased threat of global authoritarianism, or do you see it as decreasing that threat?

  • I think for authoritarians, their main idea really is to spread this narrative that freedom only leads to chaos, that freedom only leads to infighting, and that freedom never delivers. That meta-narrative is their justification for locking down, shutting down, taking down stuff. To combat that, we need to build a resilient society where, for example, if you see a post labeled as shared ground among Christian conservatives, you can also recognize very trollish attacks—maybe from a bot sowing discord—for what they are. Nowadays, the tactic from authoritarians is no longer simply taking down things but rather flooding the conversation with even more messages that create a false sense of conflict—like, “Oh, there’s so much infighting and chaos.” With our system, you can see a very clear label that this is shared ground…

  • …and that there are also different perspectives among libertarians, and so on.

  • Right. And I would agree with that because what we’re seeing now is echo chambers. Even though I would say X has…I mean, I’ve been censored on Twitter 1.0, and I’ve also been censored on Twitter 2.0. So I’ve been censored by Jack Dorsey, and I’ve been censored by Elon Musk. But one thing that you said in your paper that I found to be really interesting is: “With a deeply polarizing U.S. election fresh in mind, the need to redesign platforms that bridge divides has never been more urgent. In a paper we’ve just released, we offer a solution based on one that has already played a pivotal role in addressing similar problems in Taiwan and on X.”

  • You’re right. What’s happened now is that, even though I would say there’s more free speech on Twitter 2.0 than we saw on Twitter 1.0, we do have echo chambers. People on the left, because of the polarizing election and because Elon Musk gave some $270 million to support Donald Trump’s presidential campaign, are now all flocking to Bluesky. If you’re a conservative like I am and you go on Bluesky, you’re barely going to have an audience because social media now has divided itself into these echo chambers. X, of course, picked up a lot of support in the aftermath of Meta censoring so many people during the election. But now you’re seeing this rebrand effort—whether it’s genuine or not—by Mark Zuckerberg to try to rebrand Meta.

  • You talk about how—here’s another quote: “Instead of amplifying posts that spread misinformation or fuel outrage and division, social media platforms should empower their users to promote content that they value and that brings their communities together. What if instead of leaving content moderation to censorious governments or the political whims of tech billionaires, platforms allowed users to provide context that fosters understanding and strengthens their communities?”

  • It’s an interesting concept because we have seen accusations from the left that Elon Musk amplifies content that fuels outrage or is rage-bait for the conservative movement. But then you see people on the other side say the opposite—they said the same thing about Mark Zuckerberg and Facebook when Facebook was largely controlled by leftists. Do you find that if we were to have this content moderation system where it’s not up to tech billionaires or governments to decide terms of service, like we’re seeing with the European Union now in its attacks on free speech, that there would probably be less social animus, less hostility, and fewer political divisions if people felt like neither political ideology was being over-amplified? What’s your opinion on this? Because I think all of us at one time have found ourselves accusing Big Tech social media of being on the other side. Certainly, pre–Twitter 2.0, I found myself accusing Big Tech of being more left-leaning, and even now, I find myself criticizing and often questioning the amount of power and influence tech billionaires have because I still find myself censored even though Trump is president.

  • Yeah. I have just registered on Truth Social; my first post was reposting you, and I do feel it has more of a community spirit where users feel they have more control and are less likely to abandon the platform when they don’t feel decision-making is censorious. That fosters genuine community conversations.

  • The main technical question we’re facing is whether individual platforms, such as Truth Social—which is, by the way, free software, open source—and Bluesky, also free software, open source, can build relevant bridges so it’s not just community notes (which already happens after something very polarizing or very angry has gone viral). How about community posts, not just community notes, where you can post in your own communities on either Truth Social or Bluesky or other open-source communities, and then also have an additional label that shows there is a common ground across all those different smaller platforms? I think that will give people much more control and much more understanding of each other, beyond the models prototyped by Meta or X at this point.

  • So what is the best way to do that? Because I know you talk about this idea—it’s a concept that sounds ideal, this idea that people would have a community partnership in deciding what is true information and what is disinformation. But as you just said, these systems that have been created by X or Meta are still not ideal because they still find themselves being influenced, in my opinion, by political ideology. I often find myself wondering, “Who the hell is in control of Community Notes or who’s in control of fact-checking?” because sometimes I see these Community Notes or fact checks and they are just as absurd as the claims being made.

  • Yeah. Over and over again, we’ve seen that in the advertisement-driven feed model, only advertisers—especially very big advertisers—get to see the social fabric: the people who liked the same content, shared the same content, and so on. But that information isn’t made available to individuals; it’s just made available to the people who do individualized advertising. That derails us from a shared common experience even more because everybody sees different precision-targeted ads.

  • Part of the paper is to imagine a different funding model where communities can fund shared common experiences—so that people see shared posts that bring different communities together. In the example I just gave, if there’s a post or a piece of video that could explain the more conservative angle to libertarians, for instance, and for people who belong to both communities, that would be a way to bring them together. That’s what we call “prebunking.” Instead of debunking after the fact, we can have shared narratives before something fueled by hate or outrage happens. Again, it doesn’t require censorship at all; it just requires building more “bridge-making” content.

  • If people are interested in learning more about the People’s Bid, thepeoplesbid.com has ideas on how it could be applied to TikTok, if of course the People’s Bid won. For the underlying ideas, there’s also a book, Plurality.net, which I co-wrote with 60 other people, and it’s free software.

  • There you go. It’s in the public domain, so there’s no copyright—feel free to remake it as you want.

  • And you also have a film coming out as well in two weeks. What can you tell us about your film?

  • Sure. It’s called Good Enough Ancestor , and it tells the story of Taiwan’s democratization. I was born into a family of journalists—both of my parents are journalists—and they had to work within the martial law era’s rules, which were very censorious. By the time we truly democratized in 1996, when we had our first presidential election, it was already the time of browsers and the World Wide Web. So from the very beginning, internet freedom and our democratic freedom have been the same thing in Taiwan. We’ve innovated a lot on how to defend our democracy without sacrificing our freedom. Even though our own martial law era is over, we still have an authoritarian neighbor—Beijing—that wants to push this authoritarian agenda.

  • Good Enough Ancestor. We’re just getting that loaded up momentarily, and we’ll play it. Here’s another quote from your paper. You said, “Instead of removing or shadow-banning controversial content, platforms should empower participants to help offer meaningful context. This could include clearly showing which communities embrace certain views or watch certain videos, and labeling content as ‘shared ground’ when it reflects widely accepted perspectives, or ‘different perspectives’ when it’s more controversial.”

  • How would you—where would you draw the line? Because here in America, you have people who argue that certain content is hate speech, even though there’s no such thing as hate speech legally, because there’s no hate-speech exception to the First Amendment of the U.S. Constitution. It’s very subjective, and it’s become very political. You have people who now have a distorted understanding of what the First Amendment means, and there’s this widely held view that there is such a thing as hate speech, even though, according to the Supreme Court, there’s no such thing. The First Amendment does not protect incitement to violence or terroristic threats, but where would you draw the line regarding your system of creating shared-ground policies for content moderation?

  • Would you allow for things that automatically get you banned on social media, like child porn or terrorist threats or support for groups like ISIS? Would this system allow for things like that to exist but with extreme social pressure so that the moderation signals to society that the content is wrong or shameful, or would this system, given that you’re aiming for total decentralization, completely ban or silence content like that?

  • Yeah. I was also in Paris for the AI Action Summit, where we launched the “ROOST,” the Robust Open Online Safety Tools. Part of the ROOST idea is that the defense against attacks that are clearly unlawful should be open source and decentralized as well. For example, child sexual abuse material is illegal to host. This is not a matter of lawful or awful speech; it’s basically illegal content. Currently, we rely on Big Tech to tell us whether material is the same as known child sexual abuse material or not. Smaller operators have to pay very large licensing fees. If there are errors in the judgment, there’s really no way to appeal outside of existing Big Tech procedures.

  • So the idea of ROOST is, again, decentralization, so that even small-time content hosts can very easily detect such unlawful content automatically. It doesn’t require a community vote. But if you want to tune or update it, you can—without violating an NDA with Big Tech or any law. I think free expression is required, but underneath that, we need robust online safety tools that detect unlawful—not just “awful”—content. Above that level, anything lawful should have both a fair support for free expression and a practical method for communities to own the moderation tools, so that algorithmic manipulation cannot create this false variety or fake polarization that wasn’t actually there. We don’t need to censor anything; we just need to share with communities a way to shape their own social media reality. By making sure there’s a “freedom first” way to label something as shared ground, people can have a conversation based on that. If something is truly divisive, we can represent balanced views rather than defaulting to one extreme take as if that’s all there is.

  • And you talk about a lot of this in your book, Plurality: The Future of Collaborative Technology and Democracy , so I would encourage people to check it out. We also have a trailer for your film, Good Enough Ancestor . Before we play the trailer, just tell the viewers what inspired you to make this film, when it comes out, and why people should watch it.

  • Sure. I think the message here is that technology must advance freedom—something we really believe in Taiwan. We believe that America can also amplify this vision and make it a reality, so that people do not need to trade off between freedom on one side and safety or security on the other, but rather use technology to enhance freedom in a way that puts the steering wheel, the reins, into the hands of individuals and communities.

  • Let me go ahead and play the trailer for the film.

  • Across the world, democracy is in decline. Authoritarianism is on the rise. Global freedom has eroded for the 18th consecutive year.

  • Audrey Tang, what is your greatest hope for democracy across the globe?

  • “That people see democracy as a social technology—something that people can construct together, that they can improve in the here and now. We can code up political systems.”

  • “The Sunflower Movement was a real turning point for Taiwan’s democracy. When the Sunflower Movement happened, I said to my colleague, ‘I have to leave now. Democracy needs me.’”

  • “I brought a very long cable for internet connection. I livestreamed through Twitter. Everybody saw that it is possible for democracy to evolve, to come up with novel policies simply by asking the people, ‘What should we do together?’”

  • “I was born with a heart defect, and one day I just fainted. I woke up in a hospital, and they told me the hole in my heart had grown. I could die. This awareness that everything can be reset tomorrow reminds me of Taiwanese democracy. We can lose our democracy with just a couple of missiles.”

  • “I think there is a lot that the world can learn from Taiwan. The U.S. is running on a 250-year-old operating system. There is a strong desire to move beyond this and re-form from first principles what democracy in the 21st century would actually look like.”

  • “We can make the government more open to the public. It’ll be easier for people to access the data, to analyze it. Transparency.”

  • “Audrey has created the opportunity to make Taiwan the most exciting constitutional democracy in the world.”

  • “I think it’s a very grave moment. China really wants to invade Taiwan. You’re sitting in a place that is the flashpoint if there’s going to be World War III. And yet, there’s calm in the eye of the hurricane. That calm, I think, emanates from all that Audrey’s done.”

  • “We can become co-creators of our democratic system, reconfiguring society in a way that is more transparent, more inclusive, more fair.”

  • That’s a very nice trailer. When is the official release date for people who want to watch it, and where can they watch it?

  • I believe it’s March 9, and I’ll be posting on Truth Social and X.com and Bluesky and the usual places.

  • Wonderful. One thing I want to ask you—and it’s a personal question, so you don’t have to answer, but hopefully you will—many of the social moderation efforts to censor content online have been driven by the woke left, at least here in America. Some of the groups behind a lot of this censorship have done so at the request of LGBTQ organizations. What I find to be rather interesting about you is that not only are you Taiwan’s first digital minister, but you’re also the world’s first transgender minister. Given the fact that so many in the LGBTQ community have pushed for censorship of language or content they believe is disrespectful to the trans movement or the LGBTQ movement, what kind of attack have you received personally from individuals in that community who may have accused you of fostering a community of hate simply because you seem to be pushing for a true free-speech absolutism—which also means allowing for people to say things that may offend you?

  • Well, I was born with the condition of very low testosterone, so it is true that I am somewhat in between, naturally. But I’ve never let that bother me. My pronouns are whatever; people can use any pronouns—in Taiwan, the spoken pronouns all sound the same, tā/tā/tā—so it doesn’t really matter. So in my mind, there really is no forced language change needed for addressing me. My pronouns are officially whatever.

  • To your more serious question, I do think absolute freedom of speech and mutual respect can coexist. The main problem in broadcast-based social media is that only the most provocative ideas or “takes” go viral, and people do not actually think there’s another common-sense, middle-ground idea that could bridge different communities in mutual respect. Each side caricatures the other side as the worst version of itself, making it much harder to have common-sense conversations.

  • Do you ever find yourself receiving attacks for being a free speech absolutist or from within certain communities for advocating true free speech? Because we see a lot of people out there who claim they are for free speech, but there’s always a caveat. Sometimes they censor you if you offend them. We’re told X is truly a free speech platform, but many people have found themselves censored by Elon Musk simply because they got into a heated argument with him about H-1B visas or had a disagreement with him.

  • So while many claim they are 100% pro–free speech, there’s often a disclaimer. I just wondered if you’ve been attacked personally for promoting free speech absolutism and total decentralization in this approach to shared-ground moderation, given your own personal background.

  • Yeah, as we speak, there’s a robust conversation going on in the Rumble chatroom, which I’m reading, and I think it’s really good that we have ways to directly address people’s questions. When people say—

  • Oh yeah, we can go to the chat too. Let’s go to the chat, and we can take questions for Audrey.

  • Yes, people are commenting, for example, that my pronouns are “whatever,” so they find it interesting. There’s someone asking about my support for safe spaces for women. Of course, I’ve always supported that.

  • I think it’s quite important we have direct conversation, because only direct conversations can build true bridges. If people only read one snapshot or one clip from the other side, awful-but-lawful speech may look like it represents an entire community, but there’s no real dialogue.

  • My direct answer is that free speech is not just speech to broadcast; it’s also about broad listening. It’s the freedom to listen across differences and then respond in real time. If we have that freedom, it’s much harder to caricature the other side, and it’s basically easier to resolve tensions that come from people’s ideologies or from intolerance. So I think the fact we have the chat here is proof that absolute free speech and real-time response actually work better when we want to communicate on these things.

  • Yeah, no, absolutely. I do find it admirable because you see a lot of big talk from people who say they truly support free speech, but as soon as you say something that goes against their personal views or their personal preference or political ideology, you learn very quickly that they’re not. I want to go to the chat and take a few questions, because Audrey did say she wants to answer a lot of questions. Let’s scroll up…

  • EarlyGX said, “Thank you, Audrey, for finding solutions to end censorship.” Let’s see if there are questions. There’s a debate about whether AI needs to be regulated. Do you support the regulation of AI? And given the growing influence of AI, with the Trump administration’s AI department now focused on making the United States the world leader in AI and also cryptocurrency, what are your views on the growing advancements in AI technology, and how do you feel about AI regulation?

  • Yeah. Again, it’s similar to asking about community moderation. My answer is the same: if AI is in the hands of the people and individual communities and families, that’s a good thing because people can steer it however they want. If it’s just in the hands of one or two people, that may not be the best idea because then it ends up being one or two people determining the worldview of the entire society—aligning society to the logic of the AI’s creator, just like in social media algorithms, instead of aligning the tool to people’s uses.

  • As I mentioned, ROOST (Robust Open Online Safety Tools) is a set of AI models that safeguard against illegal material and so on, but the great thing about open source and free software is that people can tune it to their community’s liking. If your community has different standards for moderation, the AI should be tuned, tamed, and domesticated by your community, not by some faraway place, whether Silicon Valley or Beijing, where you have no control. AI becomes assistive augmentation only if we can control its tuning and domestication so that it suits our own purposes, not some abstract Big Tech purpose.

  • Let’s go to the chat again. A couple more questions. Flydog777 said, “How would we legislate or mandate a law to embolden this?” And Marnie says, “What are steps we can take to support this?”

  • Those are great questions. In Taiwan, for example, we sent 200,000 SMS text messages from the official 111 number to random Taiwanese citizens, asking what to do about trolls, deepfakes, and attacks on information integrity online. People shared their ideas. Then we basically asked people to volunteer like a jury. Four hundred and fifty people, randomly selected but statistically representative of Taiwan, came into online conversations in rooms of ten people each—45 rooms total—and had these discussions. People definitely said, for example, “You should not be able to use money to buy trollish influence that impersonates celebrities and gets them to say words they never said.” Those deepfake ads need to go, but we need to do that without censoring anyone.

  • So how do we do that? The people’s idea was that if it’s an advertisement claiming it’s from Audrey Tang, then it needs my digital signature endorsing it. That’s it. If Facebook, for example, does not secure my signature and somebody gets scammed out of two million dollars, then starting this year, Facebook in Taiwan is liable for that two million. Fraud, scams, and financial damage can be combated in a way that doesn’t involve censorship but instead requires people paying for the ad to sign it digitally. That’s the Anti-Fraud Act, a very sensible piece of legislation I hope other jurisdictions consider. Also, the idea of finding pro-freedom, non-censorship solutions is something governments around the world can engage their people in.

  • As for concrete ways to support this, the ideas of decentralized social media are still pretty new. Spreading that idea is one step. You can find many policy ideas and solutions in the Plurality.net book.

  • So the concept of a digital signature would only apply to advertisements, to prevent scams. But that doesn’t necessarily address what some would call trolls, because a lot of memes could be considered trolling. How do you address that without censorship? Whether a meme is trolling is a matter of interpretation or perspective: someone on the receiving end may see an attack, while an observer may see legitimate commentary, satire, or artistic expression.

  • Yeah. To go back to the Romania example your vice president brought up in Munich: I think what happened there is not a simple meme but rather a way for one person to control thousands of accounts, as if there are thousands of different people with different political views “trolling” together. The idea here is not to censor content but rather focus on the actor. If there’s a foreign actor pretending to be 5,000 Americans, that should be detected as soon as possible. And you can do it in a way that does not compromise privacy at all.

  • In Taiwan, we have this idea of decentralized wallets, an infrastructure that allows people to digitally sign without revealing personal details. I can sign saying I’m a Taiwanese resident or even a resident of Taipei City if it’s a discussion about city budgets, but it doesn’t reveal my name, address, Social Security number, or anything else. This “selective disclosure” ensures it’s one person behind one account, making it impossible for a foreign actor to control 5,000 accounts. That’s also the sort of infrastructure that helps.
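The selective-disclosure idea can be illustrated with a toy sketch. Real decentralized wallets use standardized verifiable-credential formats; this hypothetical version only shows the underlying salted-hash pattern: the issuer signs hashes of salted attributes, the holder later reveals just one attribute with its salt, and the verifier checks that single commitment against the signed list. The attribute names and values here are made up for illustration.

```go
package main

import (
	"bytes"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// hashAttr commits to a single attribute under a random salt, so that
// attributes the holder does not reveal leak nothing to the verifier.
func hashAttr(salt []byte, name, value string) [32]byte {
	msg := append(append([]byte{}, salt...), []byte(name+"="+value)...)
	return sha256.Sum256(msg)
}

func main() {
	issuerPub, issuerPriv, err := ed25519.GenerateKey(nil)
	if err != nil {
		panic(err)
	}

	// Issuer side: commit to each attribute, then sign the commitment list.
	attrs := []struct{ name, value string }{
		{"resident_of", "Taipei City"},
		{"name", "Alice"},             // hypothetical attribute
		{"national_id", "A123456789"}, // hypothetical attribute
	}
	salts := map[string][]byte{}
	var commitments []byte
	for _, a := range attrs {
		salt := make([]byte, 16)
		rand.Read(salt)
		salts[a.name] = salt
		h := hashAttr(salt, a.name, a.value)
		commitments = append(commitments, h[:]...)
	}
	credentialSig := ed25519.Sign(issuerPriv, commitments)

	// Holder reveals only residency: one attribute's salt and value.
	// Verifier recomputes that commitment and checks the signed list.
	revealed := hashAttr(salts["resident_of"], "resident_of", "Taipei City")
	listSigned := ed25519.Verify(issuerPub, commitments, credentialSig)
	found := false
	for i := 0; i+32 <= len(commitments); i += 32 {
		if bytes.Equal(commitments[i:i+32], revealed[:]) {
			found = true
		}
	}
	fmt.Println(listSigned && found) // prints true; name and ID stay hidden
}
```

The point of the pattern is that one credential can back many proofs: the verifier learns “this is one real Taipei resident” without learning which one, which is what makes it hard for a single actor to pose as thousands of accounts.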

  • So how would people go about legislating this? Our system here is a bit different. You have Congress and lawmakers who have to draft up a bill, get support, take a vote on the House floor, get approval in the Senate, and then the bill goes to the desk of the president. How, given our system here in the United States, do you foresee citizens going about getting something like this legislated?

  • Well, first of all, if something is shown to be technically possible and even preferred, then people will put pressure on legislators to make it happen. One idea I have is to make platforms interoperable, like USB-C connectors. One good thing the European Union did is say you must use the same connector standard—without specifying which company makes them—so you have this USB-C idea we’re all benefiting from. We’re no longer compelled to buy five different sorts of cable: micro-USB, Lightning, and so on.

  • Imagine the same for social media, so that instead of Meta or X hoarding all the content, if you want to view it on Truth Social or Bluesky or some other smaller experience, you can do so freely. They can’t close the fire hose. They can’t keep the speech from being displayed in smaller communities. Interoperability is worth fighting for because it emboldens free speech and makes it harder for a large platform to censor content. If you’re interested, there are some policy blueprints about that at Project Liberty (the Project Liberty Institute). I’m a senior fellow there.

  • My hope is that we could try it out with TikTok through the People’s Bid, if we get that with McCourt and O’Leary, or with smaller platforms like Truth Social or Bluesky. Once people see it really works well, it transfers into legislative action.

  • And going along—just to pick your brain on authoritarianism—aside from the obvious CCP threat, which we’ve talked about extensively on Loomer Unleashed, what do you think right now is the biggest global authoritarian threat, especially regarding censorship and technology? We’ve seen the threat of China, and given your position as the cyber ambassador in Taiwan, I’m sure you’re constantly worried about the threat of a Chinese occupation and the CCP. But aside from that, what is the greatest global threat of authoritarianism right now?

  • As I mentioned, the authoritarian narrative is not just about censorship but about pushing the idea that freedom only leads to chaos, that it only leads to infighting. This idea is repeated by many people, not just the CCP, and basically encourages giving up hope that absolute freedom of speech and expression or assembly can coexist with mutual understanding. If we make it go viral that technology must advance freedom—and freedom can lead to shared understanding—that is a powerful countermeasure, a counter-narrative, to this notion that freedom only leads to infighting. Look at America, right? So I think we can all participate in this counter-narrative.

  • Wonderful. Well, thank you so much for coming on my show tonight, and thank you for answering so many questions. We have a lot more, but you’ll have to come on again. Hopefully, people will read your book, Plurality: The Future of Collaborative Technology and Democracy, and also watch your film Good Enough Ancestor. I believe you said it comes out on March 9. Where will people be able to watch that film?

  • I’ll share it on Truth Social and on X.com—my handle is @audreyt on both—and also on Bluesky. Most of all, I would invite people to follow the Plurality work. Plurality.net has a Discord channel. Feel free to drop in and chat.

  • Wonderful. Well, thank you so much for coming on tonight. I really appreciate it. Thank you for speaking out about these issues and the work you’re doing to fight for free speech and bridge these subjects of censorship, authoritarianism, and collaborative communication for the preservation of democracy on an international stage. I find it to be really interesting. I’ve been reading the book and reading your papers, and I’m really looking forward to watching your movie when it comes out.

  • Well, thank you, and thank you to the people in the chatroom for being civil. I see just now Flydog777 says it was informative and interesting, so that makes me really happy.

  • Yeah, well, thank you. We have a lot of people on Rumble, and we also have—let’s see how many people on X, because I stream the show on X and Rumble. I don’t know how many viewers we have on Rumble right now, but we have 60,000 live viewers right now on X. Hopefully, everybody watching finds it to be rather informative. So be sure that if you’re watching right now, you repost the live link on X and Rumble, and also be sure to follow Audrey on X and Truth Social. Thank you so much.

  • Thank you. Live long and prosper.

  • Thank you—wonderful!