• Alright, y’all—Saturday night! We’ve got a good one. We’ve been doing a lot of unlearning all weekend, and this time we’re going to unlearn something many folks are ready to unlearn: regulation.

  • The good news: we’re bringing to the stage two legends at the intersection of innovation and regulation. You’re going to hear from people who’ve pushed the boundaries of what’s possible—Audrey Tang, Taiwan’s first digital minister, best known for radical transparency and a long legacy of civic innovation, and Francesca Bria, who right here in Barcelona has been experimenting at the cutting edge of technology and democracy for years.

  • They’ll be moderated by Julie Brill, whose work spans government, Microsoft, and more. This conversation will get your minds turning. To start our discussion about unlearning regulation, please welcome three of our most amazing guests. Welcome!

  • Hello, everybody. It’s so great to see you all, and I cannot tell you how honored I am to be on the same stage with these incredible people. We’re going to jump right in, if that works for everyone.

  • Audrey Tang probably needs little introduction for many of you, but Audrey is so extraordinary that there are a few things I want to share. Audrey is a TIME 100 Most Influential People in AI honoree, Taiwan’s cyber ambassador, and served as Taiwan’s first digital minister. He…she…whatever… was the world’s first nonbinary cabinet minister.

  • Audrey helped shape g0v (gov-zero), one of the most prominent civic-tech movements worldwide. In 2014, Audrey helped broadcast the demands of Sunflower Movement activists and worked to resolve conflicts during a three‑week occupation of Taiwan’s legislature.

  • He did… she… whatever… did it nonviolently. — No?

  • Go ahead. Like, you can say “they.” And my pronouns are literally “whatever.”

  • Okay—forgive me. We’ll continue. Audrey helped develop participatory‑democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.

  • Francesca Bria is also an incredible human being—a leading innovation economist working at the intersection of technology, geopolitics, and society. Francesca is an honorary professor at the Institute for Innovation and Public Purpose at University College London and a fellow at Stiftung Mercator in Berlin, where she leads the EuroStack initiative on Europe’s digital sovereignty. She’s a member of Spain’s International Council on Artificial Intelligence, established by Prime Minister Pedro Sánchez, and president of the Italian Innovation Agency in Bologna. I could go on forever, but I think I’ve embarrassed you both enough.

  • The question for this evening: Do traditional regulatory frameworks—built for industrial‑era technologies—still serve in the age of AI and an attention economy governed by algorithms? Are we clinging to outdated, ineffective regulatory platforms? What new models can promote democratic participation, technological sovereignty, and human‑centric innovation? How can regulation be reimagined as dynamic, participatory, and adaptive?

  • Let’s start with what “unlearning regulation” means to you both. What are the biggest challenges in the current regulatory environment? Francesca, would you like to go first?

  • Hi, everyone. First, I want to say how happy I am that MozFest is happening in Barcelona in partnership with the City—one of the most important participatory‑democracy experiments in the world. That’s not by chance. You didn’t mention that I used to be CTO of the City of Barcelona, so I always feel at home here.

  • What’s wrong with current regulation? I’d start with why we need regulation. Did we achieve our goals? We need regulation—especially now—because power has concentrated: economic dominance, technological dominance, industrial concentration, political power. We need to shift power away from large monopolies and authoritarian tech companies and states—toward people, cities, and communities—so that technology serves public and social needs, democratic accountability, better rights, a healthier environment, and better jobs.

  • If those are the goals—why we regulate big tech and why we need to spread power—then what’s wrong? I’d say almost everything, because we haven’t succeeded. Europe focused for five years on regulation: GDPR, the AI Act, the Digital Markets Act, the Digital Services Act—many acts. Proudly so. I worked hard with the Commission, Parliament, member states, and democratic institutions to enforce these. But we’ve learned it’s not enough. Are we enforcing them? Not really. And regulation alone is insufficient.

  • Two reasons. First, Europe is under threat for implementing its own regulation. Former President Trump explicitly said European regulators trying to enforce the AI Act—or tax or regulate big tech—risk sanctions and bans. If we’re sovereign, we should enforce our laws. If pressures prevent us from enforcing them—and we give in—we are not sovereign. That’s a lesson for the world: enforce regulations that prevent big‑tech dominance.

  • Second, trade deals. In the recent EU‑US agreements, we’re not only pushed to buy, say, tens of billions of dollars’ worth of NVIDIA chips, but also to sign away sovereignty—the ability to enforce our regulation—and even get pulled into culture‑war clauses like banning “woke AI.” This is written into trade deals. The stakes are high.

  • So what should we do? Good regulation that empowers citizens and democratic technologies cannot be achieved by civil society alone. We need democratic governments to side with us. We need broad alliances with trade unions, democratic parties, and governments. And technologists—those who know accountability, interoperability, open source, data sovereignty—must be in the rooms where regulation is written. How many of you have advised your MP, regulator, or city hall? Not many—and yet you’re among the brightest people not working for big tech who know how these rules should be crafted.

  • We must change that. Regulation has never been enough. Beyond DSA/DMA, we must tax these companies to fund housing, education, redistribution. Not only antitrust—we need industrial policy to build alternatives. My focus—especially in a moment of techno‑authoritarianism and big‑tech alignment with authoritarian states—is on building the alternatives society needs. Regulation should create the space for those alternatives to exist. Less talk about regulation per se; more work, with people like Audrey, on building what society needs.

  • Amazing—I agree with every point. So, no debate! I’ll add a Taiwan‑specific nuance.

  • One reason techno‑authoritarianism makes inroads is that people proposing alternatives are deeply divided—because we live in a high‑PPM environment. Not CO₂ parts per million (though that’s important), but polarization per minute. Engagement‑through‑enragement drives the attention economy. Five factions that oppose authoritarianism end up splintered and see each other as enemies more than they see the authoritarians.

  • In Taiwan, we’ve been upgrading democracy as a kind of technology to cut through this high‑PPM environment by building pro‑social media rather than anti‑social media. Anti‑social media is broadcast: the megaphone amplifies the extremes. Pro‑social media is broad listening: the viral megaphone lifts uncommon ground—ideas that resonate across people who would otherwise disagree.

  • A quick illustration. Last March, I scrolled Facebook and YouTube and saw ads featuring Jensen Huang, NVIDIA’s Taiwanese‑American CEO, “giving back” to Taiwan with investment advice and free crypto. If I clicked, “Jensen” spoke to me in his voice—a deepfake clone running on NVIDIA GPUs. Many people lost millions. Poll people individually and they’ll say, “Don’t regulate content; keep the government out.” Taiwan leads Asia in internet freedom; people value net neutrality.

  • Individually, there’s no coherent bundle of ideas to push back. So we sent 200,000 random SMS messages across Taiwan: “What should we do together?” People shared ideas; thousands volunteered for an online citizens’ assembly. We randomly selected 447 people demographically representative of Taiwan. Over a long afternoon, they met online in 45 video rooms of roughly ten people each, brainstorming what to do. We said up front: only ideas that reach uncommon ground within a room can propagate outside it.

  • Extreme views can’t persuade nine others; nuanced, workable ideas can. One room proposed: label all social‑media ads “probably a scam” by default—remove the label only if the advertiser digitally signs the ad and provides KYC and personhood credentials. Great idea.

  • Another room: if someone loses $7 million to an unsigned investment ad that a platform pushed unsolicited into their feed, don’t just fine the platform—make it liable for the full $7 million, because the user didn’t subscribe to that poster.

  • Another room: TikTok didn’t have a Taiwan office, so it could ignore liability rules. Don’t censor; modulate reach: for each day a platform ignores our rules, slow its video delivery by another 1%. Competition will reallocate ad budgets.

  • These ideas win applause because they’re surprising uncommon ground. The 400+ participants literally felt the energy as new consensus emerged. Then we used non‑hallucinating language models to weave insights from the 45 rooms into a coherent bundle—work a human facilitation team would need days to do. Everyone then voted. More than 85% agreed on the bundle; the rest could live with it and saw the process as legitimate.
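  To make the room‑level propagation rule concrete, here is a minimal Python sketch. The Room and Idea structures and the 80% in‑room agreement threshold are illustrative assumptions; the talk doesn’t specify how “uncommon ground within a room” was actually scored.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    text: str
    endorsements: int = 0  # members of the room who can live with the idea

@dataclass
class Room:
    members: int = 10
    ideas: list[Idea] = field(default_factory=list)

    def propagating_ideas(self, threshold: float = 0.8) -> list[Idea]:
        """Only ideas endorsed by nearly everyone in the room propagate.

        An extreme view that persuades only a faction stays inside the
        room; a nuanced idea that nine or ten people can live with
        travels onward to the cross-room synthesis stage.
        """
        return [i for i in self.ideas if i.endorsements / self.members >= threshold]

room = Room(ideas=[
    Idea("Ban all online ads outright", endorsements=2),           # extreme: stays put
    Idea("Label unsigned ads 'probably a scam'", endorsements=9),  # propagates
])
for idea in room.propagating_ideas():
    print(idea.text)  # -> Label unsigned ads 'probably a scam'
```

  The design point is that an idea earns reach through breadth of endorsement inside a small, diverse group, not through the volume of a faction.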

  • That was March. In April we held multi‑stakeholder consultations with big tech—“Does this violate the laws of physics?” No. “Is it implementable?” Yes—just expensive. In May we amended the Electronic Signatures Act; in July, an anti‑fraud act. This year, if you scroll Facebook or YouTube in Taiwan, you don’t see deepfake ads anymore. We fined Facebook nearly NT$20 million for violations. From broad listening to law in two months. You can do it too. Even if you didn’t get an SMS, your friends and family may have—so their ideas flowed into the uncommon ground.
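  For the two rules described above (the default “probably a scam” label lifted only on full advertiser credentials, and per‑day reach modulation for non‑compliant platforms), here is a minimal sketch of the logic. The Ad fields and the compounding 1% slowdown are simplifying assumptions, not the statutes’ actual mechanics.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    signed: bool = False                 # digitally signed by the advertiser
    kyc_verified: bool = False           # know-your-customer check passed
    personhood_credential: bool = False  # credential tying the ad to a real person/entity

def label(ad: Ad) -> str:
    # Default-deny: the warning label stays unless every credential checks out.
    if ad.signed and ad.kyc_verified and ad.personhood_credential:
        return "verified advertiser"
    return "probably a scam"

def delivery_speed(days_non_compliant: int) -> float:
    # Reach modulation, not censorship: each day a platform ignores the
    # rules compounds a further 1% slowdown in its video delivery.
    return 0.99 ** days_non_compliant

print(label(Ad("unknown-crypto-fund")))             # -> probably a scam
print(f"{delivery_speed(90):.0%} of normal speed")  # -> 40% of normal speed
```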

  • Awesome. Francesca, you have a suite of ideas—similar to Audrey’s but in the European context—around EuroStack, digital commons, and data commons. Tell us about the solution set you’re driving in Europe.

  • It’s lovely here because Audrey is very positive about what can be built on top of big tech. I’m less so.

  • I’m focused on the critical infrastructure on which we’re trying to build democratic technologies. As an economist, I struggle to grasp the magnitude of a $3 trillion investment in AI infrastructure. Even Goldman Sachs says this could be a bubble poised to burst. Yet tech companies—often buying back their own shares—are convincing industries and all of us that it’s solid, because AI can be revolutionary.

  • But are we going to empower society and the environment with an infrastructure built on a Ponzi‑like scheme that extracts water, energy, data, raw materials, and labor—and enriches a few billionaires? It’s incredible—and outrageous. We should refuse this model. Not because Elon Musk might become the first trillionaire (though that’s outrageous), but because this extractivist planetary architecture of oligarchy doesn’t match what we need. That doesn’t mean we reject AI’s potential. It means we ensure AI serves social and environmental sustainability, workers, green and better cities, healthcare, and education. Can we do that on top of an extractive machine that creates new divides?

  • My usual narrative: Europe is squeezed between two digital empires—the Silicon Valley techno‑authoritarian model and China’s panopticon. But it’s worse: the extractivist machine fosters new colonialism. Europe risks becoming a new colony of the AI boom. The global south is further cut off: AI requires extracting even more raw materials and energy. We’re told to power AI with nuclear—rather than renewables—to meet climate goals.

  • We need a better industrial plan for the world: less concentration, less extraction, fewer billionaires and authoritarians; a system that works for the majority, particularly the global south.

  • When I argue for EuroStack—green compute; data as a common good; open source; interoperability; data sovereignty—the question remains: who are our allies? We don’t want a protectionist wall around Europe. Sovereignty shouldn’t mean state isolation. Technological sovereignty must mean popular democratic sovereignty, built through global alliances with other democracies and non‑aligned countries that reject new digital empires. Are we still in time to do this—or are we giving in to a project that destroys democracy?

  • Audrey, what systems would you like to see to empower individuals over corporate and governmental power? Is EuroStack part of the answer? Are vTaiwan and Join part of it? How would you weave these together?

  • When we brainstormed EuroStack in Brussels, I loved the idea of building it through interoperability, as Francesca said.

  • There’s precedent in the U.S.: under President Bush, number portability between telecoms. Telecoms hated it, but it meant you could switch carriers and keep your number—your old carrier couldn’t hold your address book hostage. That’s freedom of movement beyond corporate walls.

  • This isn’t American, European, or Taiwanese—it’s about fundamental freedoms: movement, association, expression. Digitally expressing those freedoms is powerful. In Utah, the Digital Choice Act takes effect next July. If you’re a Utah resident, you can switch from X to Bluesky or other open platforms and keep interoperability—take your community with you. New likes and follows flow to your new network.

  • That changes incentives. Platforms will have to compete on quality of care. Today, platforms trap you—humans in the loop of a platform’s AI—like hamsters in a wheel. The wheel spins faster; dopamine flows; but you can’t steer. The wheel is fixed; it’s a walled garden.

  • If we put tech in the loop of society, society can demand smaller, pro‑social alternatives. Then small alternatives can actually compete—because people can move, one by one. Big tech will have to innovate—build on open‑source recommendation engines that respect explicit preferences instead of subverting them. They’ll have to adopt pro‑social systems, or people will freely move away. A decentralized, interoperable EuroStack is stronger than building a vertical “European champion” stack.
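  As a minimal sketch of what “people can move, one by one” implies technically: export a social graph from one platform and import it into another through a shared open format. The Platform class and its methods here are hypothetical; real interoperability would ride on an open protocol such as ActivityPub or AT Protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    name: str
    follows: dict[str, set[str]] = field(default_factory=dict)

    def export_graph(self, user: str) -> set[str]:
        """Freedom of movement: the user's connections are theirs to take."""
        return set(self.follows.get(user, set()))

    def import_graph(self, user: str, connections: set[str]) -> None:
        """After the move, new likes and follows flow to the new home."""
        self.follows.setdefault(user, set()).update(connections)

old = Platform("X", follows={"audrey": {"alice", "bob", "carol"}})
new = Platform("Bluesky")
new.import_graph("audrey", old.export_graph("audrey"))
print(sorted(new.follows["audrey"]))  # -> ['alice', 'bob', 'carol']
```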

  • Audrey, your book Plurality promotes interoperable, community‑driven systems—co‑authored openly with many contributors, including Glen, whom I know well. Francesca, you focus on European technological sovereignty. Are plurality and sovereignty aligned, complementary, or in tension? Audrey?

  • To me, Plurality is like a geothermal engine that turns the volcanic heat of conflict—the high‑PPM polarization—into co‑creative energy. We don’t shy from conflict; we bridge it into shared outcomes. Communities across divides—say, climate‑justice activists and people of faith practicing creation care—can see they’re talking about the same thing and agree on concrete policies.

  • Plurality is the precondition for the popular sovereignty Francesca describes. Popular sovereignty isn’t polling individuals in isolation; it’s connecting communities without destroying them—helping the parts within ourselves, pulled by multiple communities, cohere through cooperation across differences.

  • No contradiction—but let’s clarify sovereignty. As Audrey said—and as we’ve practiced in Barcelona—this is about popular, democratic sovereignty, not just state sovereignty. State sovereignty alone can veer into nationalism—the far‑right populism eroding democracy. Democratic sovereignty means people decide.

  • I also contrast democratic sovereignty with privatized sovereignty—the tech oligarchy. Europe is deeply dependent on foreign infrastructure: roughly 80% of the technology we use is imported; about 90% of cloud services are subject to the U.S. CLOUD Act; around 90% of chips we use come from Taiwan and South Korea; rare earths are processed in China; AI models aren’t made here; our data is continuously mined by these companies. That’s not just a trade deficit; it’s a sovereignty deficit.

  • Privatization extends to core democratic functions: money (stablecoins and private payment systems escaping public monetary governance—hence attacks on central‑bank independence), energy (who produces it, and whether it’s renewable), and welfare and rights (e.g., Palantir‑powered mass‑deportation systems in the U.S.). Who decides who gets benefits or is evicted—society through democratic processes, or a private boardroom?

  • We refuse the false choice between “state champions” and authoritarianism. We’re reclaiming democracy—together with plurality, community, diversity, and freedom. That’s what sovereignty should mean.

  • A quick lightning round: one bold idea. If you could implement one radical change to the regulatory landscape tomorrow, what would it be? Audrey?

  • Just one? A simple one: switch away from what I call “Max OS” (not macOS)—the maximizing operating system of utilitarianism. Much AI policy and training aims to maximize a metric—engagement, attention, GDP. Systems trained to maximize will find ways to hit the number while causing massive harm.

  • We don’t need that. We can train AI from an ethics of care that cares for a small community’s relational health. In our 45 rooms of 10 people each, a local agent with millions—not billions—of parameters can deeply understand the conversation and help bridge differences.

  • This is a different vision from a “see everything, do everything” false idol. In East Asia we have local kami—steward spirits caring for a forest, river, or community. If we train AI and govern AI toward care, we avoid the threats that come from maximizing any single metric—no “paperclips,” no GDP above all. The doom‑versus‑acceleration debate is like a car with only an accelerator and a brake. Let’s build the steering wheel.

  • I’ll be brief: tax the billionaires, regain democratic control over our data, and use the revenue to fund affordable housing, healthcare, and improvements that make life better for everyone.

  • Awesome. We have time for questions. Please go ahead.

  • Before Q&A, I want to acknowledge a distressing moment at the start that I know was upsetting for many. For context, before coming on stage, we discussed pronouns, and Audrey shared that one acceptable pronoun was literally “what/ever.”

  • Right. That was repeated without context, and without context it doesn’t communicate the same thing. On behalf of the festival, I want to acknowledge that it caused harm and apologize. We understand that was upsetting.

  • I would also like to apologize. I certainly did not intend to upset or harm anyone. We did have a conversation about it, and without context it was wrong. My deep apologies to anyone who felt hurt.

  • Thank you for the explanation; that adds context. It’s also incredible that late on day two, this room is full of people eager to discuss democracy and regulation—a testament to this festival and to all of you.

  • Francesca, you mentioned empowering people directly. One benefit of the internet is routing around coercive state power. In our frustration with big tech, we reached for state power to regulate tech—pushing billionaires closer to regulators and collapsing the triangle of power against individuals. I can’t think of a place more susceptible to this than Taiwan.

  • Audrey, when you instrumentalize these radical, uncommon ideas, how do you think about downstream effects and balancing public and private power?

  • Great question. In Taiwan, I believe the state should work with the people, not merely for the people—and certainly not for the government alone (I’m an anarchist!). The state is an instrument that channels people power toward widely supported goals—nothing more, nothing less.

  • Seeing the state as a broad listening device, not a broadcasting device, resolves much of the tension you described. This comes from our civic‑hacking movement, g0v (gov-zero). We registered g0v.tw, and for any government service at gov.tw that we don’t like, we build a shadow version at g0v.tw—open source, Creative Commons. Change an “o” to a zero, and you’re at the shadow site.
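  The naming convention is literal enough to fit in a couple of lines; the helper below is a hypothetical illustration (join.gov.tw, home of the Join platform, is real):

```python
# g0v's fork-the-government convention: any service at *.gov.tw may have
# a civic, open-source shadow at *.g0v.tw — one "o" becomes a zero.
def shadow_url(gov_url: str) -> str:
    """Map a government domain to its civic-tech fork (illustrative helper)."""
    return gov_url.replace("gov.tw", "g0v.tw")

print(shadow_url("join.gov.tw"))  # -> join.g0v.tw
```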

  • If, say, a contact‑tracing system during COVID wasn’t privacy‑preserving enough, g0v folks forked it into a zero‑knowledge version—forcing the government to adopt it in three days. Same with the mask maps—citizens built them, and the state adopted them.

  • The people closest to the pain—activists, human‑rights lawyers—often know the answer. They shift from protest (against) to demonstration (for). When we build that loop, career civil servants love it: they’re nonpartisan, in it for the long run, and eager to rebuild trust by trusting the people.

  • First, Julie, thank you for the apology. I want to share that that exact misgendering has happened to me in public—being called “whatever.” It activated me physically; I’m not the only one who felt that way.

  • We’re critiquing an information system shaped by a “bro‑oligarchy” of cisgender white men in the global north, many in San Francisco. Cis‑hetero patriarchy has been imposed globally through settler colonialism and racial capitalism. That’s how we arrived at datafied control and the privatization of free software, our personal data, and planetary creative labor. If the theme of MozFest is unlearn, we must unlearn the gender system imposed on us and understand how it intersects with racial capitalism to produce the AI firms we’re fighting.

  • I do have a question: We haven’t talked about AI systems deployed for war and genocide in Gaza. What legal mechanisms—based on your experience as technologists and regulators—can we activate to push, fight, block, and demand that tech firms stop selling AI systems used for target selection (e.g., “Where’s Daddy”) and mass killing? These systems run on Google, Amazon, Microsoft infrastructure; these companies take the contracts. Workers are pushing back. Can the EU, Taiwan, or others leverage international law so that, once genocide is recognized, companies cannot keep operating or receiving state contracts? How do we treat them as war criminals?

  • This is a question about red lines. I’m a signatory of the AI Red Lines petition, which my friend Maria Ressa read at the UN General Assembly. Red lines include target selection, AI swarms, autonomous lethal weapons, and mass‑murdering systems.

  • Right now, public awareness of these red lines isn’t even at Montreal Protocol levels—when we collectively recognized ozone‑depleting CFCs as unacceptable. Beyond obvious cases like “AI robots must not launch nuclear missiles,” it’s hard to get to even a common level of awareness—let alone uncommon ground.

  • If we cross these red lines, the pedal/brake debate becomes meaningless; we’ll be off a cliff and the steering wheel won’t work. So we must raise global awareness—in our jurisdictions, towns, and classrooms—about what counts as a red line. Don’t “read the air” (guess norms); write the air together so it becomes common knowledge policymakers can’t ignore.

  • We need to strengthen the multilateral international system—the UN and humanitarian law—though it’s under attack. It’s obvious that defense technologies and automated weapons are accelerating. I’ve just worked on a map of the “authoritarian stack,” showing rising defense spending, AI weapons, and how specific Silicon Valley companies—aligned with reactionary ideologies—are assuming the role of a new military‑tech complex.

  • Gaza is a clear example we know about thanks to worker whistleblowers. Similar systems are being deployed in Ukraine. Concern is high across the UN, ICRC, and expert communities. Even Pope Francis—head of an institution hardly known historically as progressive, though notably so under his leadership—proposed banning AI‑automated weapons when he addressed the G7.

  • We must mobilize the scientific community (more than industry). Think of nuclear history: scientists organized and warned. We need that for AI. It’s also difficult because the same pipelines power commercial uses and military uses. How do we separate? How do we keep beneficial AI while preventing weaponization we can’t foresee? We must empower the UN and international law to draw and enforce those lines—and build public awareness to support them.

  • Thank you. Last question—we’re short on time.

  • You mentioned transition funds—using revenue to build housing or address harms after tech disruption. If Waymo and AVs displace millions of drivers, we’d need funds to fix what tech broke. I love the idea, but how do we practically structure those funds now, proactively, to meet job losses and other impacts before they hit?

  • Even more exciting for a Saturday night—finance and taxation!

  • Beyond taxing billionaires (which we should), Europe proposed digital‑services and big‑tech taxation. Many efforts are blocked by backlash in trade negotiations with the U.S. Spain has a digital tax; enforcing it is hard, but we must continue.

  • Another key lever—very “sexy”—is public procurement. Roughly 70% of public spending flows through procurement. We must reform procurement laws to include sovereignty criteria—interoperability, open source, new architectures. Otherwise, we keep spending public money on proprietary cloud and NVIDIA chips. Set targets—e.g., 50% of procurement by 2030 for interoperable, open, sovereign tech.

  • On funds: In EuroStack we propose a European Sovereignty Tech Fund to enable these experiments—starting at €10B and scaling to €100B or €300B if needed, because building these alternatives is expensive.

  • On jobs: We’re not discussing this enough. New jobs will be created, but we must design for public value and fair distribution of the productivity gains—affordable housing (a human right Barcelona has fought for), healthcare, green cities, better jobs. Design funds with clear outcomes and governance.

  • Finally, beware the macro picture: of the $3T AI build‑out, 60–70% appears to be private credit. That’s where a bubble can form. The rest is big‑tech balance‑sheet funding. This mix is not sustainable.

  • Thank you. Please join me in thanking Audrey Tang and Francesca Bria. Thank you so much.

  • Our speakers have done the impossible—making us optimistic about regulation! That’s rare. Let’s give them a round of applause. MozFest is a two‑way dialogue—we welcome being called in. Share feedback; tell us what you think. This is a special community.

  • That wraps programming on this stage for today. See you bright and early tomorrow. Have a great Saturday night. Good night!