• Hello, I’m here with Audrey Tang. It is fantastic to meet you, and today I’d like to discuss some of what we’ve been talking about recently: your thoughts on the collection of information surrounding disinformation, on monitoring mechanisms, especially those set up by the European Union, and on how we can make those trustworthy to the public.

  • Transparency is something that you mentioned. Yeah. What do you think people should know about ways to do that effectively?

  • Yeah. So in the Taiwanese example, the main repository where people can see what kind of information manipulation is going on, what is trending and so on, is a common good. And because it’s a common good, it’s maintained by civil society. For example, the collaborative fact-checking ecosystem Cofacts was started by g0v, which is a civic movement, but its ecosystem includes many private sector players, such as Trend Micro and Gogolook, some of the most exciting antivirus and anti-scam innovators, and so on.

  • But all of them share this same commons system. So whenever people report something, even on end-to-end encrypted channels like LINE here, which is like WhatsApp, where you can long-press a message and report it, the report goes into this multi-stakeholder, collaboratively maintained database. It doesn’t go to the government, which is why everybody can see very transparently what is going on, and the government becomes responsive, because we trained our civil servants, and during election times even the ministers, to respond very quickly.

  • Because if you respond within an hour, then your response actually reaches more people more quickly than the trending information manipulation does. It has a pre-bunking effect, compared to a very laggy response. If you wait for 24 hours and then respond, that becomes debunking, which doesn’t work. The two main lessons I would share are these: first, if you have a transparent, collaborative commons, it’s better if it’s maintained by multiple stakeholders, including civil society and the private sector.

  • And second, pre-bunking always works better than debunking. But you can kind of make pre-bunking work even after the fact, if you have a good enough monitoring system and you’re able to respond within 60 minutes.

  • Great. Okay, so I think that was a good overview. One thing I have to say, Audrey, is that you are very smart, and what we’re going to need, of course, is to break it down into pieces.

  • Of course, let’s take a deep dive.

  • So, okay, the Cofacts that you mentioned. I’m familiar with it, but for people who are not, and correct me if I’m wrong: it’s a forum-like site where people, when they find information online, post it and ask others, is this true, is this false? Sort of a community-driven approach.

  • It’s like a Wikipedia, but for fact checking. And it is also part of the media competence training, not just in lifelong education or community college and so on, but also in our basic education. Because in 2019 we changed from media literacy to media competence. Literacy is when you’re individually consuming information, but competence is when you see something wrong, you can do some investigation and then contribute to the common understanding.

  • So the students can fact check the presidential candidates as they’re having a debate and so on. And once they feel they can contribute to the society in this way, then collaborative fact checking becomes much more possible. Because it is the journey of going through thinking about which source is trustworthy and so on that inoculates a mind.

  • It is not seeing the checked facts that inoculates a mind. So the process, the journey is much more important.

  • Great. So one thing I wanted to ask about, with that education model as opposed to just traditional media literacy, is how in-depth does that education need to be? There are so many skills involved, like open-source intelligence for checking a source, versus lateral reading, or inoculation. How extensive does it need to be for people to feel confident in themselves and to be effective at this?

  • Yeah, that’s a great question. The first part is easier. We have seen from recent research that even just a few rounds of conversation with debunkbot.com, which is a language-model-powered, evidence-based conversation module, have a very strong effect: even people who firmly believe in a conspiracy theory will, after a few rounds of conversation, lose confidence in their original conspiracy theories, because of the language model’s ability to present the evidence point by point.

  • So I don’t think it takes days or weeks or something. If you have a good enough module, and it is point-by-point and evidence-based, then it just takes a few minutes to inoculate a mind. Especially if it’s done in a group setting. Research has also shown that in a deliberation setting, where you’re exposed to people who believe in different things, different ideologies, but you manage, after conversation, to agree on some common values.

  • This experience also inoculates the mind, sometimes for years, against polarization. So for the first part I’m quite optimistic: it only takes a few minutes. The second part: not everyone, after they get inoculated, becomes so good as to inoculate other people. That is not very easy. So I would say it’s easier if we keep these two apart, if we just say, oh, everybody’s invited to go through this fact-checking journey, but not everybody should feel the burden or the responsibility of spreading the word, spreading the vaccine to other people.

  • That is, I think, for a bunch of very dedicated and sometimes a little bit nerdy people who really find joy in this kind of thing. But just look at Wikipedia: I think it’s just 10% or fewer of the people doing the editing. Mostly we just help fix typos or something.

  • Okay, yeah, very cool. That definitely rings true. I have a background in developing these sorts of inoculation interventions, and I recently spoke with some researchers at the University of Cambridge who have a similar thing, which is deliberative processing with people, and they’re creating an AI that can do that, which it sounds like…

  • Yeah, exactly. Oh, exactly, yeah. It’s a very interesting observation in how previously, because facilitation and deliberation takes years of training, it doesn’t tend to scale that much. But nowadays with language models and especially with kind of sense-making modules, people suddenly discover, oh, it’s becoming much more scalable and therefore are much more willing to try large-scale experiments.

  • So one thing I wanted to ask you about. This seems particularly pertinent when there are, say, political issues, or issues on which a lot of knowledge or research has been accumulated. So when people are forming their viewpoints, they can coordinate with others or check basic facts. The other side that we sometimes see in disinformation is when there is exactly zero: data voids, in which there’s just nothing, and actors come in to fill it.

  • Do you have an example of that?

  • So an example would be a hurricane. A hurricane happens, and generally speaking there is almost always some comms crisis of sorts, where people need to know where to go, where to get water. It’s unexpected. Similarly with the pandemic: when people were trying to figure out the origins, it was very easy to have a conspiracy, and you couldn’t quite have a deliberative process, because there just wasn’t enough credible information to provide to the public.

  • I know I’m giving you a hard example, but.

  • Yeah, and there’s a word for that, from the WHO: it’s called an infodemic, because there are so many requests for information, the desire is so strong, but the science is not there yet. So this void creates very fertile ground. If there were a scientific consensus, then these ideas would not even spread.

  • But because of the lack of scientific consensus, it’s the perfect time for them to grow into an infodemic. And exactly like a virus, but memetic: the most potent one goes viral.

  • Gotcha. Okay, great. So one thing I wanted to ask: you describe this mechanism of crowdsourced fact checking, source verification, all of that. What do you see as the intersection, and you did talk about this when giving the holistic viewpoint, but just to break it down for people listening, between this collaborative environment and the more traditional monitoring groups or civil society organizations who have, you know, a mandate to do that? Can those reinforce each other?

  • Yeah, definitely, definitely. Wikipedia is actually a very good example. There are many groups, meetups and so on, that focus narrowly on, for example, heritage buildings or museums or things like that. So whenever you try to add something on Wikipedia, or edit one of the existing articles, all their members get pinged, and they make sure that it’s up to the rigor and so on that they care about.

  • So a good commons project like Cofacts serves as a hub. It’s not that we entirely rely on crowdsourcing; rather, it’s backed by professional, sometimes internationally recognized, Poynter-Institute-level organizations like the Taiwan FactCheck Center or MyGoPen and so on. And because each of those teams has limited resources, they rely on the crowd’s flagging to show which items are trending.

  • Because if something is not trending, if it only has a replication factor, a basic reproduction number, of less than 1, then it doesn’t go viral, and therefore it’s probably not worth your time to do professional fact checking. But if it is going viral, then the great thing about the commons is that it can show very quickly that something is going viral.
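The reproduction-number threshold can be sketched numerically. This is a toy branching-process model (the function and numbers are illustrative, not from any Cofacts tooling): each share produces on average `r0` further shares, so a message with a basic reproduction number below 1 fades out, while one above 1 keeps growing.

```python
def cascade_sizes(r0, generations=10, initial=100.0):
    """Expected number of new shares per generation in a simple
    branching model where each share triggers r0 further shares."""
    sizes = [initial]
    for _ in range(generations):
        sizes.append(sizes[-1] * r0)
    return sizes

# Below the threshold the cascade dies out; above it, it goes viral.
dying = cascade_sizes(0.8)   # 100 shares shrink to ~10.7 after 10 generations
viral = cascade_sizes(1.5)   # 100 shares grow to ~5766 after 10 generations
```

This is why early trend signals from crowd flagging matter: only the few items whose effective reproduction number exceeds 1 are worth a professional fact-checker's limited time.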

  • And every day there are really only two or three of these things, so the professional fact-checking groups can focus on those. This is how they collaborate. We are also seeing that, based on the previous fact checks, the Cofacts team is sometimes training language models so that even before the first professional team comes on stage, they can already provide this kind of instant rapid response based on what they previously did, to at least frame it so that people can understand: oh, this is a mutation of a memetic virus from a few years ago, but now it looks almost the same.

  • So maybe think about it first before the professionals enter the stage.

  • Interesting. Okay, yeah, that is very cool, because when I used to work in inoculation we would have to give people the tactics and techniques that we’d identified as being important. But in the process you’re talking about, it sounds like the community kind of learns organically, which I think sounds fantastic. Another thing I’d want to follow up on, because I think for a lot of people, Wikipedia is kind of magical.

  • A lot of people don’t quite understand how it works, or, you know, the Internet is filled with so much, how do I put this politely? Garbage, or things that don’t necessarily work the way that they should. So how, for something like Cofacts or…

  • Wikipedia or OpenStreetMap, or how do you…

  • Get it so that you have a community that is trustworthy and what do you do with sort of bad actors or people who are trying to change the information environment of that community?

  • Yeah, I think many of the most successful open source commons infrastructures rely on offline communities, like literally people meeting every week. That is why I referred to the meetups. So in a sense, the face-to-face nodes build the civic muscles, the trust between people, and then many of those different nodes link through weaker ties online.

  • But for each particular subject, for each particular field, there exists a very strong offline connection, like the g0v community itself. For more than 12 years now, g0v has met every other month, usually in Nankang, at our national academy. That is what enables informal ties between civil society groups to grow into stronger bonds over time. My main suggestion for the EU context is not to bypass your existing community-level face-to-face associations and civic groups and so on, but rather to tap into them so they can link laterally, horizontally.

  • Okay, great. Yeah, you already sort of got where I was going to go, which is how to transpose some of these findings to the EU. So you’re not saying that we have to scrap the entirety of the projects that have been going on, but rather to integrate them with existing…

  • More on the ground, it sounds like.

  • I mean, I read some interesting things about the Taiwanese context, in that people generally have low trust in the Internet, so a lot of the media literacy programs were on the ground.

  • That’s right. And also we have some of the lowest isolation and loneliness levels among developed countries, which means that people find meaning in face-to-face connections.

  • So within the European context, I suspect that, and I would need to look into this, but trust in things found online is likely similar, and community ties throughout the Western world are, you know, getting harder and harder to maintain as a trend. So part of your solution, it sounds like, is community generation and trust building, alongside having that community apply itself to this particular issue.

  • That’s right. So, like, having the civics classes empower students, having the community colleges empower lifelong-learning students, and having the various spiritual centers and health centers and so on help the helpers, so that the helpers, who are already a set of community organizers, can then foster a healthy trust ecosystem within their community.

  • And what Cofacts and other g0v projects have done is to provide a clearing house, an infrastructure, so that the individual learnings, the individual fact checks, are not just trapped or siloed in their communities, but rather can fruitfully feed into each other. So it’s more like connective tissue than anything top-down.

  • Gotcha. Gotcha. So, yeah, what I would say as the next part is, having worked in government, you know, with a lot of these issues it gets down to the nitty-gritty very fast: logistics, budgets, where to go. How to measure efficacy is definitely a very big thing.

  • Yeah, what’s the ROI? So, yeah, what would you say, when other governments are looking at this, about what they should view as success? And are they going to need to, I don’t know, increase taxes, or, you know, stop spending on other potentially important services? What would you say about the expense side of this?

  • Yeah, that’s a great question. I think we need to see this as a media environment. Just as in traditional journalism, where the Hutchins Commission long ago wrote an entire book about what constitutes a healthy media ecosystem, the two main properties are, first, that it needs to be bridging, in the sense that people are polarized, or people have diversity, and that’s a very good thing.

  • But then the media need to provide balanced viewpoints, so that people can see that my fellow citizens, although we believe in different things, are also human, and there are common grounds that we can agree on. So that’s the first thing. And second, it needs to be balancing, meaning that if there are this many diverse viewpoints in a society, it should not let just one voice dominate; it should surface the other, less commonly heard voices, so that they can contribute to the epistemic commons, meaning that people can collectively know more than each of us individually.

  • So if you apply this to the social network landscape, then it’s very clear that you can involve the people who are building those bridges, for example through Community Notes on X.com, or on YouTube; it’s like jury duty. So you can measure, before intervention and after intervention, whether the polarization level has gone down.

  • You can do a randomized survey to ask, after a deliberation at a civic assembly, whether you hate people of different ethnicities or different political parties less or more. So that can be quantified. The depolarization effect should be the key attribute for you to monitor, and that’s the bridging part.

  • For the balancing part, I think one of the main ideas that we pursue is whether people younger than 18, or people who are not citizens but just residents, or people who were otherwise excluded from political speech and voice, feel that their viewpoints are being put on the table in such ecosystems. If you’re building a clearinghouse of collaborative fact checking, pay special attention to non-citizen immigrants, or pay attention to people younger than 18, because if they see this as somewhere they can contribute meaningfully to society, then even if they’re not citizens de jure, they are citizens de facto, and that helps bring the society into balance.

  • So these two things are worth investing in, and they have clear, measurable outcomes. And then, based on these, I think most of the other costs can be borne by the private sector or civil society. We have seen many unicorns or good startups building on this commons, which is infrastructure contributed by the public and nonprofit sectors, and then offering very profitable systems, services, tailor-made solutions and so on on top of this shared common ground, and then their investors will pay the rest of the bills.

  • Great, great. Okay, so I’m glad you mentioned not just the ROI, but how the system can sort of maintain itself in a positive way. So I would kind of want to go back. So for a reader or I guess a listener, if you were starting at absolute ground zero on establishing something like this, where would you start?

  • So would it be with setting up the, you know, online infrastructure of a community system? Would it be sort of this on the ground work both at the same time? In your view, what would be the optimal way? And you know, keep in mind, no place is going to be starting from ground zero.

  • But ideally, ideally, yeah. What is the ideal way this could go?

  • Well, ideally I would start with broadband as a human right. Anywhere in Taiwan, you’re guaranteed to have something like 10 Mbps both ways, even on top of Yushan, which is nearly 4,000 meters high, or on rural islands using a satellite connection. Because if you don’t have that, then you essentially exclude a bunch of people, who can then only be on the consuming side, because broadcasting still works but they don’t have the bandwidth to contribute.

  • Once you build this kind of digital democracy system, online citizen assemblies and so on, and then say, oh, I’m sorry, you don’t have good enough bandwidth so you can only listen, it creates a rift, a divide, in your society. Nowadays, providing high-bandwidth bidirectional communication is not as expensive as it used to be, especially with microwave and satellite.

  • So I would definitely say invest in that first, so that you do not have classes of haves and have-nots in your country. Now, assuming that this is done, the next thing is to work on competence education, to ensure that people in school, and also school teachers and lifelong learners, can see contributing to the commons as part of what people generally see as, you know, their discovery journey, the things they learn in school and so on.

  • Not just in their heads, but rather out there with the society. And this act of sharing is the most important part, because if you build the agency, that is to say the feeling that they are contributing to the society, then it doesn’t matter if they get things wrong sometimes; as long as enough people do that, the ecosystem of knowledge usually converges on something that looks like a common ground after a while.

  • But if only a few people are willing to share, but most people just passively consume, then again it creates an asymmetry, a power imbalance.

  • Okay, great. Yeah, I think that is a very good point. You know, we have many contexts where our ability to contribute, to have a voice, is through voting, and still many people do not feel empowered enough for that to matter to them. So there’s very definitely a mindset portion, and, you know, education, and not just a mindset, but also creating…

  • Yeah, I mean, if they can’t vote yet in the presidential election because they’re too young, at least they can vote in participatory budgeting within their school or something. You can do that every week. You can have citizens’ initiatives and many other platforms for them to exercise that muscle even before they turn 18.

  • Okay, great. So we have the foundation of a society where people have trust. Now, moving to the online components: you know, let’s say we’re starting where we are now, with a variety of different platforms that are, you know, privately controlled and that sometimes, whether they feel like it or not, weigh in on issues of information and disinformation.

  • Where would you see a position for either, you know, government actors or citizens themselves to operate within that intervention environment, which may have a lot of, you know, misinformation or, you know, otherwise harmful content? Harmful being a difficult concept to define, but, like, you know, hateful rhetoric, or just…

  • Generally bad for mental health.

  • Yeah, or like calls to violence, which I think are pretty specifically known to be bad. So what would you suggest? Sorry, it’s sort of a broad-scope question. But we have this thing where, you know, idealized people have civic education; next, they’re confronted with an information environment that is similar to what we have now.

  • What would be sort of the next step in the online sphere?

  • Yeah, so one thing I would definitely suggest is not getting addicted to the touchscreen. Because if you’re addicted, your mind works in a very different configuration, one that makes it much more susceptible to spreading viral hate or harm or things like that. And if you’re looking at it from a non-addicted perspective, then maybe you learn something from it, maybe you ignore it, but at least the harm is not perpetuated by the consumer.

  • So, for example, my phone is always in grayscale mode, because that’s the way my mind works: if it’s colorful, I get addicted; if it’s grayscale, I don’t. Or people sometimes interact with a stylus, or with a keyboard and mouse, and again not a touchscreen, so they’re less likely to get addicted. So I think one large part is to show people how the mind works differently in different contexts.

  • For example, in classrooms in Taiwan we always prefer to use large tablets or large laptops, so the screen is for sharing, not for isolating yourself from other people. And we do that very consistently all through K-12. So showing people the right mode in which to engage with social media content, I think that is one big part.

  • Another big part is just to make sure that the online ecosystem has a kind of fact-checking component in it. On X.com there’s Community Notes, where you can sign up for jury duty, and YouTube has something like that as well. And I think going forward we can gradually say that if you’re a social media ecosystem and you allow bots to run rampant, and you do not secure KYC, know-your-customer, or digital signatures, and you allow people to pay for advertisements that impersonate, you know, celebrities to sell crypto scams or something, then perhaps the platform needs to be liable for the damage that is caused.

  • It is already the case in Taiwan: if Facebook does not secure the digital signature or KYC of a celebrity, posts the advertisement anyway, and somebody gets conned out of 1 million, then Facebook is now liable in Taiwan for that 1 million, which is why they have implemented good KYC rules, along with TikTok and YouTube and so on.

  • So this rule was crowdsourced by an online citizen assembly in Taiwan. Many other jurisdictions, I’m not saying they should copy what Taiwan has passed, but at least they can consider running something like a citizens’ assembly, so that they can show conclusively to the social media companies that if you violate these rules and we slow you down, or you even get blocked, it is not an authoritarian government doing censorship.

  • It is exactly what the people want and you’re violating that. Yep.

  • So, okay, so citizen input on how these information platforms are run. That seems like a very effective way to avoid the tricky issues of, you know, top down governments dictating to platforms.

  • That’s right, without democratic control. Yeah, because when we ran the citizen assembly online, we found that people don’t want the government to look into the content. Content-level moderation is something they don’t want the government to do. But digital signatures, actor-level authentication? They’re like, yeah, of course, let’s do that.

  • Okay, great. Because, yeah, I think with a lot of the discussion around these issues, governments sometimes treat it as a military issue or, you know, a foreign policy issue. But it sounds like these measures you’re talking about, operating more directly with the public, are sort of a different way.

  • Right. It’s a commons issue. It’s democratic governance of a commons. Like in some European countries, where water is a commons: of course people have to come together and deliberate about that. And I work with Project Liberty, I’m a senior fellow at the Project Liberty Institute, and the founder, Frank McCourt, is a very experienced builder of physical infrastructure in cities.

  • And he says that nowadays some social media are like mixing the tap water with the sewer system. And if in a city even just, you know, 2% of the sewage gets mixed into the tap water, you would say it’s a broken system. You wouldn’t say the 98% is fine.

  • So that is a matter of commons. And of course people have to come together and deliberate.

  • Great. And what would you say about the authentication procedures you mentioned? We have seen those used successfully in Taiwan to hold platforms accountable. I think it’s also worth mentioning the case of South Korea, where, you know, they attempted to limit anonymity on the Internet, and it was by and large just not generally considered to be successful.

  • So is this something you would generally advocate, and if so, how do you do it successfully?

  • Well, I mean, in real life, like when I’m talking to you, even if you anonymize yourself so that I don’t know who you are, and maybe you wear some masking apparatus, still I know it’s one person that I’m talking to. It’s not likely that in the next split second you will be able to appear as 10,000 people filling this room.

  • Right. But that is what’s happening with bots online. So I think it’s one thing to say that as a person I should enjoy anonymous speech. It’s another thing to say I should be able to run 2,000 bots, each pretending to be a different person. I don’t think there’s a right, as part of freedom of expression, to impersonate other people or to run 10,000 random bots.

  • And those bots should not have freedom of speech. So the idea of personhood credentials is not to show anyone any of your private information. The idea of a personhood credential is just to show that you’re not a bot. I think we need to delineate these two very clearly. And using more advanced technologies like zero-knowledge proofs, it is now possible to prove, for example, that you’re a resident of this country without disclosing where you live.

  • It’s possible to disclose that you’re above age 18 without disclosing your birthday or your birth year. And it’s possible now to disclose that you’re not a bot without disclosing anything else. So we should encourage investment into those privacy technologies so that we can solve the bot-or-not problem without compromising anonymity.
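The shape of such a credential, revealing only a signed predicate rather than the underlying attribute, can be sketched as follows. This is a toy illustration, not real zero-knowledge cryptography: it uses a shared-key HMAC, whereas a production system would use an issuer's asymmetric signature plus a ZK proof so the verifier holds no secret; all names and parameters here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical trusted issuer's key. In a real deployment this would be
# an asymmetric keypair, and the verifier would hold only the public part.
ISSUER_KEY = secrets.token_bytes(32)

def issue_over18_credential(birth_year: int, current_year: int) -> dict:
    """The issuer checks the birth year privately and signs only the
    boolean predicate, so the credential reveals nothing else."""
    over18 = (current_year - birth_year) >= 18
    claim = f"over18:{'yes' if over18 else 'no'}"
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_credential(cred: dict) -> bool:
    """A platform accepts the claim if the issuer's tag checks out;
    it never learns the birthday or birth year."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"])
```

The same pattern, proving a signed predicate instead of revealing the attribute, is what a personhood credential does for "not a bot".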

  • Okay, great. So, yeah, what I would also want to talk about is that, you know, people are scared of technology at this point. You mention this in your book: in a lot of democracies around the world, you know, we mostly feel that we haven’t been able to keep up with technology.

  • Technology is just moving too fast; it’s something that happens to us. Yes.

  • So, yeah. What would you say about ways we can think about it? Because for most people, most jobs are not centered around technology. Our education systems may cover, you know, the basics of computer science, but not necessarily enough for most people to talk in intelligible ways about, you know, blockchain technology and that sort of thing.

  • So what do you think would be a good way to bridge that gap, as, you know, input into society increasingly requires more technical knowledge? I mean, the obvious answer, I think, is education.

  • And broadband is a human right.

  • And broadband is a human right.

  • But, yeah, in addition to that, I would also say that the more technologies are in the open, as in free software and open source, the more likely it is that people will be able to bend the technology to their actual needs, so that it doesn’t feel like something that happens to me, but rather something that I can tinker with, remix and so on.

  • And so once people get into that mindset, like a maker mindset, it’s much more likely that people will see technology as something that works with the people, not just for the people. And so part of what the government can do is to invest in what we call public code, which is open source technology, coupled with some code of conduct, code of deployment and so on.

  • So it can be rolled out as infrastructure. And in the European Union there are already very good examples, like the German Sovereign Tech Fund and so on, that fund this kind of effort. The main challenge nowadays is just that people are not generally aware of it. People know there is Wikipedia, people know there’s OpenStreetMap, but they’re not necessarily contributing to them or joining a community.

  • So there’s a real disconnect, in Europe and other places, between people who are technologists working in the free and open source realm and people who are community organizers and educators and so on. So one of the main things the government can invest in is simply bringing these two communities together, because they work from the same ethos.

  • And once they empower each other, then you get something beautiful which is a civic technology community.

  • Great, great. So, yeah, I think those are all of my questions for now. Our audience is probably going to be mostly in Europe, but are there any other thoughts that you would, sort of last minute, want them to keep in mind?

  • Sure. I always say: to give no trust is to get no trust. So instead of asking people to rebuild trust, first ask how much you can trust the people. If you work in policymaking, you can make your decision-making process more transparent, more auditable. You can trust people with real-time open data, so that when things happen they don’t need to file a freedom-of-information request or something.

  • They can already look at the entire history of relevant data to make decisions with, because then the trust is built in a way that is part of the social fabric, instead of just focusing on a few stakeholder groups doing consultations and so on. You see a gradual increase of trust. In 2014, the trust level from the citizenry toward the Taiwanese government was 9%.

  • It was very, very low. But through radical transparency and civic participation, we were able to rebuild it. In 2020 it was 70%, and it’s now consistently over 50%. So I think we were able to essentially recover from the trust crisis not by asking people to trust the government, but by having the public service trust the people.

  • All right, great. Well, thank you for speaking with me. It’s been a pleasure. And hopefully we’ll speak again.

  • Great. Until next time, live long and prosper.