-
Welcome.
-
Great. Thank you so much for meeting me. I know you have a busy day already. Just a little bit of background: this is for my master’s thesis at Harvard. I was here at AIT this past summer as an intern, and I was fortunate to serve as acting spokesperson during my time here. I was able to see you, and I saw the events that you hosted.
-
The digital dialogues.
-
I know. There was one just last week too. [laughs]
-
That’s right.
-
Great. My thesis topic is the effects of Chinese sharp power on Taiwan’s 2020 elections. I’m studying this to draw out, in a sense, implications for the US, and how the US could help Taiwan and also its other allies.
-
The other way around. How Taiwan can help.
-
Yeah, I have a set of questions I was wondering if I could talk to you about, specifically on what the government is doing in terms of technology and innovation.
-
To start off, I just want to have it on the record: are there Chinese sharp power attacks in Taiwan, and if so, how does the Taiwanese government detect them?
-
As a matter of fact, when you look at the Taiwan FactCheck Center, tfc-taiwan.org.tw, which is an independent, nonpartisan, non-government-funded social sector organization that looks at trending disinformation and does attribution and investigative work on it, it’s very clear that several messages originated from the PRC.
-
For example, during the anti-ELAB protests, there was a trending message in the Taiwan social media landscape that tried to portray the Hong Kong protesters as so-called rioters who were allegedly paid $20 million to murder police.
-
That is, of course, a gross misrepresentation of their cause, and what this is obviously trying to do is lessen the effect that the anti-ELAB movement has on the Taiwan presidential election.
-
You actually see it trending more in Taiwanese social media than in Hong Kong, because people there would easily see through this disinformation package. It’s certainly not targeted at a Hong Kong audience.
-
Now, the TFCC actually attributed the original wording of this already heavily remixed and illustrated disinformation back to the Weibo account 中央政法委长安剑, the account of the PRC Central Political and Legal Affairs Commission, and they provided the original fabrication, which misuses a Reuters photo and tries to paint the subjects as teenage protesters engaging in violent activity just to buy some iPhones and things like that.
-
The second-level remixers then create supporting narratives on top of that. It is clearly attributed as a PRC action.
-
(pause)
-
Have you or the TFCC seen a similar attack before, during, or after anything related to the most recent election in Taiwan?
-
The one that I just cited, which is number 204, was published by the TFCC on November 15, so already quite close to the election. I would say that it was part of the pre-election disinformation landscape. After the election, as you pointed out, there was a flurry of disinformation packages that tried to invalidate the election result and, meanwhile, to sow discord between the Taiwanese people and the US.
-
For example, there was a disinformation package that said that when Tsai Ing-wen was printing the voting ballots, she instructed the CEC to use a special invisible ink developed by the CIA, so that no matter who you voted for, you would end up voting for Tsai. This is, of course, against the laws of physics. [laughs]
-
The clarification mostly involved interviewing chemistry teachers, but more importantly, we can see the intention of this disinformation. It’s not only to sow discord in the democratic process, but also to point a finger at the US.
-
What this is really trying to do is make people feel less secure about the democratic process, thereby pushing them toward a more authoritarian mindset and making them more susceptible to future disinformation packages. That’s my take on what it is doing post-election.
-
Now moving to more general topics, what policies are you guys implementing right now to combat not only Chinese but propaganda and disinformation in general?
-
A lot of disinformation doesn’t quite spread by itself. It spreads only through people who want to remix it, to mobilize people in outrage. That is the main emotion behind most disinformation packages.
-
If you hear about young people allegedly being paid to murder police, or if you hear about the CIA supplying invisible ink, you feel angry and a little helpless, which is a kind of paralyzing emotion. People get out of this emotion by clicking share and writing something motivating for their social groups. The message then carries their credentials into their friend circles.
-
That’s how disinformation spreads: by provoking outrage. It’s kind of like a memetic virus outbreak. One of the ways we work with the fact checkers, for example the TFCC, is not only to provide them with real-time clarification messages, for example, on exactly how the ballots are printed. They’re printed by each municipality and city; the CEC doesn’t actually print anything.
-
(laughter)
-
That’s the factual response. It’s equally important that we frame our clarification messages in a way that is humorous. This idea of a timely response, of rumor and clarification, requires a deadline: two hours after each trending disinformation is detected. Many ministries can now do so in 60 minutes.
-
What they do is basically provide a clarification to a trending rumor by using humor, especially humor that corresponds to existing memetics, what we call “gung” here, meaning Internet memes, to ensure that people find it genuinely funny, hilarious even. When people laugh about it, the anger is vented into humor.
-
They will also organically share humorous messages, so the clarification goes viral more than the disinformation. Anyone who views this humorous repackaging of the clarification and laughs about it becomes immune to the outrage, because humor and outrage are two outlets of anger, and they are mutually exclusive. People become inoculated against this kind of outrage.
-
There’s a new political party formed around this very idea, called the 歡樂無法黨 or the Unstoppable Happy Party.
-
(laughter)
-
That’s not the same one as the Froggy one, is it?
-
That’s the one.
-
That’s the one.
-
That’s it. “Can’t Stop This Party” is the name, with Froggy, Retina, Shasha77 and Brian.
-
OK, great. You guys create memes so that they can spread more quickly?
-
Mm-hmm.
-
OK.
-
(pause)
-
Could you explain a little bit, you mentioned that humor and outrage are two outlets of anger, and they’re mutually exclusive, so when they use humor, that prevents the outrage?
-
Yeah. Let me just use one concrete example.
-
Yeah, I’ve seen this one.
-
There was, of course, a piece of disinformation, or maybe just misinformation, but any misinformation is a potential avenue for disinformation attacks. There was a rumor saying that if you perm your hair multiple times within a week, the state will fine you one million NT dollars.
-
The clarification message, produced by our premier within an hour, says this is false, and then shows a younger version of himself saying, “I may be bald now, but I would not punish people with hair.”
-
The fine print says, “What we have done is introduce a labeling requirement that begins in 2021,” and then there is the premier as he looks now, and this is the memetic payload: “However, if you keep perming your hair multiple times a week, you will not damage your pocket, but you will damage your hair. When it’s serious, you can look like me now.”
-
That is good humor, because this is not really satire; he’s not making fun of other people. This is the kind of humorous message that, as we say in Mandarin, 「愈幽愈默而愈妙」, roughly “the subtler the humor, the more wonderful,” makes you smile. This went viral, and within a couple of hours, if you looked in a search engine for “perming hair fine” or whatever, all you saw was this picture, this meme.
-
The original disinformation is nowhere to be found.
-
Let’s see. I know in technology we talk a lot about measuring effects, measuring effectiveness. Have you guys been able to find some sort of metrics to…
-
Yes, certainly.
-
To measure your policies, and how effective they’ve been, or…?
-
For example, LINE built this dashboard, and LINE was the most difficult one to get metrics from, because it’s all end-to-end encrypted. There really is nothing for search engines to absorb. Even with this end-to-end encrypted channel, the LINE CSR department in Taiwan built a dashboard to offer insight into all the disinformation that’s been voluntarily flagged by users.
-
It’s like donating your inbox spam to Spamhaus so they can analyze who is sending those unsolicited messages. It doesn’t infringe on your privacy; these are people voluntarily sharing emails they don’t want to receive. It’s the same with LINE: people can flag messages that they suspect to be disinformation packages to the official LINE fact-checking account.
-
They offer insight into the trending ones; an accumulated flag count, which is quite high by now; the unique messages, because one message may be flagged multiple times; and, importantly, the positive impact of messages already clarified by their four partnering fact checkers.
-
There are multiple affordances. For example, there is a bot, 美玉姨 (“Auntie Meiyu”), that can automatically feed those clarifications back into the group where the message originated; Trend Micro developed a very similar bot; and LINE also sends the clarification back to the people who flagged the message as disinformation.
-
There have already been more than 1.5K such positive impacts as of today, moving from the fact checkers back to where the disinformation originated. This is important because it helps build an attribution chain of where the remixes happen and where intention enters the picture.
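A minimal sketch, in Python, of the flag-and-clarify loop described above. The fact-check store, fingerprinting scheme, and reply callback are hypothetical placeholders for illustration, not the actual Cofacts or 美玉姨 code:

```python
# Sketch of a flag-and-clarify loop: users forward suspected disinformation,
# and known rumors get an automatic clarification reply. Hypothetical design.
import hashlib

# Hypothetical store mapping message fingerprints to published clarifications.
FACT_CHECKS: dict[str, tuple[str, str]] = {}

def fingerprint(text: str) -> str:
    """Normalize and hash a message so trivially remixed copies collide."""
    normalized = "".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def handle_flagged_message(text: str, reply) -> None:
    """Handle one user-flagged message; `reply` sends text back to the flagger."""
    key = fingerprint(text)
    if key in FACT_CHECKS:
        verdict, url = FACT_CHECKS[key]
        reply(f"This message has been fact-checked as {verdict}. Details: {url}")
    else:
        # Unmatched flags are queued for human fact checkers; per-fingerprint
        # flag counts are what feed a trending-rumor dashboard like LINE's.
        reply("Thanks! This message has been queued for review by fact checkers.")
```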
-
Does LINE work with TFCC as well?
-
Yeah, they work with the TFCC, with Cofacts, with Rumor & Truth, with MyGoPen, and Cofacts itself is crowdsourced and all open source anyway. If you go to Cofact, singular, removing the “s,” you see the Thai version of Cofact, because they also use LINE there. Their civil society just uses the same open-source technology.
-
Is any of this supported by the government or is this all from civil society?
-
The idea of multi-stakeholderism is that the government does what we do best, which is providing real-time clarifications as quickly as possible, certainly within one news cycle, so that journalists can do balanced reporting. They don’t have to take our word for it, but they don’t have to hold their publication cycle waiting for the government either.
-
We’re committed to making at most two-hour responses to each trending disinformation, so that’s our part. The Taiwan FactCheck Center is firmly in the social sector and is not controlled by any political party or by the government, so they make their own judgments. Global multinational companies such as FB actually take the TFCC into their algorithm.
-
When the TFCC attributes something as false, this story of teenagers paid to murder police in Hong Kong, or the CIA invisible ink, it actually stops being shown in people’s newsfeeds that much. You have to scroll quite a bit to see those buried messages.
-
This is akin to moving incoming spam into the junk mail folder. It’s not a takedown; it’s rather like a public notice. When you do see it, it links back to the TFCC article.
-
It sounds like you guys work a lot with private companies but also with…
-
The journalists and social sector.
-
…with the journalists, social sector. Who is coordinating all of this?
-
In a multi-stakeholder fashion, the mechanism itself, the multi-stakeholder model, does the coordinating. There are regular sharing events, and there is a self-regulatory norm, what we call a norm package, signed by the likes of Yahoo!, Google, LINE, FB, and PTT. PTT in particular is interesting, because it’s an open-source, open-governance project, a kind of social-sector version of Reddit.
-
They can also pilot many new ideas and see whether they have an effect on disinformation disarmament. I would say that the social norm building is not coordinated by any one single political mechanism; rather, it works by creating a space for the free exchange of ideas and honest measurement of what worked and what didn’t.
-
It’s not just the administrative branch either. For example, the Control Yuan, which is a separate supervisory branch under our constitution, has helped since the last mayoral election by publishing the raw data of political campaign donations and expenses. Previously they published only statistics, and only they had access to the raw data.
-
Now they publish structured data, enabling investigative journalists and data scientists to draw their own conclusions. This is key, because we can see plainly from the Control Yuan data from the mayoral election that much precision-targeted political advertising on FB and other social media was declared neither as campaign donations nor as expenses.
-
They say they’re just random supporters paying a lot of money to push a campaign’s message, so you don’t see it in the Control Yuan filings. But the Control Yuan established a new norm: structured data, raw data access, and an open license for everybody to analyze.
-
We can then say to FB that it’s not the administration putting pressure on you; it is what our society expects as a norm for political contributions and expenses.
-
You can help our mission by conforming to the social norm, but also by opening up your advertisement library, so that in real time people can see whether any candidate is using what we call dark strategies to hyper-target a certain group with micro-precision and spread disinformation to discourage them from voting.
-
If people know that this will only be published as statistics after the election, there’s a lot of incentive for them to do so. But if they know that their effort will be discovered within an hour by independent investigators and journalists, and that they will face social sanction for it, they will refrain from doing so.
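A minimal sketch of the kind of cross-check this open, structured data enables, assuming hypothetical CSV exports and column names (the real Control Yuan and ads-library formats differ):

```python
# Compare observed political ad spend per candidate against declared campaign
# expenses, flagging gaps as leads for journalists. Hypothetical schemas.
import pandas as pd

declared = pd.read_csv("control_yuan_expenses.csv")  # candidate, payee, amount_ntd
ads = pd.read_csv("ads_library.csv")                 # candidate, sponsor, spend_ntd

spend = ads.groupby("candidate")["spend_ntd"].sum()
filed = declared.groupby("candidate")["amount_ntd"].sum()

report = pd.DataFrame({"ad_spend": spend, "declared": filed}).fillna(0)
report["undeclared_gap"] = report["ad_spend"] - report["declared"]

# Candidates whose observed ad spend exceeds what their campaign declared;
# these are starting points for investigation, not findings in themselves.
print(report[report["undeclared_gap"] > 0].sort_values("undeclared_gap", ascending=False))
```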
-
FB agreed to publish its ads library in a way at least as transparent and accountable as the Control Yuan’s data, and to do it in real time. That’s like the Honest Ads push in the US, while Google and Twitter simply said, “OK, during the election we don’t run political and social advertisements.”
-
It sounds like a lot of the work is a whole-of-society crowdsourcing model.
-
It’s norm-building.
-
Yeah, norm-building. Now, I want to talk a little bit about the 2020 elections. I was wondering, what were you guys able to learn from the 2020 elections regarding Chinese misinformation and propaganda?
-
First, decoupling referenda and elections is a really good idea. In the previous mayoral election, each referendum topic was a point of disagreement in society. They automatically split society in half, even the well-intentioned ones. It’s not even disinformation. They provide a natural opening for disinformation packages to sow discord and destroy trust.
-
Add to that the fact that certain mayoral candidates were also proposers of referenda. And add to that that there is no campaigning allowed on election day, but there could be campaigning for referenda on referendum day, which was the same day.
-
(laughter)
-
That massively complicates the message landscape, making clarification of disinformation almost impossible, when you can freely spread disinformation packages concerning referenda that also have a mobilizing effect on other referendum items [laughs] as well as on the mayoral elections. It’s impossible to take care of and clarify everything in real time. There are different, what we call, attack vectors or attack surfaces.
-
Now we’ve moved them apart: the representative voting, for people, happens in one year, and the referenda voting, for issues, happens in the subsequent year, so voting for mayors and voting on referenda alternate.
-
In alternating years, people’s focus will be only on political parties and candidates on the election days in the election years, and then on issues, deliberation, and deeper policy discussion in the referenda years. We know how to defend each if they don’t occur on the same day.
-
Have you found that…
-
Because there’s less attack surface, there’s also less impact from disinformation and propaganda.
-
Have you guys seen that there was less propaganda and disinformation in the 2020 elections compared to the 2018 elections?
-
That’s a question for Doublethink Lab. There are people who work on quantitative studies who can answer your question in a more quantitative fashion. I believe they’re still processing that data. I would encourage you to interview Puma Shen of Doublethink Lab.
-
Could I ask what other new policy proposals you’re considering now regarding Chinese disinformation and propaganda?
-
Certainly. Propaganda is not limited to disinformation. It could be outright lobbying or outright interference with the election process. Disinformation is just one of the avenues. There’s more than one way to scam without going through spam; spam is just one of the ways. Once the Internet community mostly solved spam, the scammers moved to other approaches.
-
They’re not constrained to using spam, and the same is true of disinformation. Disinformation is now a generally recognized issue; people expect that any message may potentially be misframed, intentionally or not. What we call media competence is higher compared to four years ago. This is good. However, there may also be other, more covert ways.
-
For example, there may be ways to pay intermediaries and order them to masquerade as local players and participate in illegal lobbying or illegal interference with elections and so on. Previously, legally speaking, only the actual person doing this and the originator were liable to penalty.
-
That’s very difficult to enforce. These are throwaway accounts. Even if you penalize them, the intermediaries can easily find other collaborators, and the originating payer is often outside of the jurisdiction. It’s very difficult to prosecute them.
-
The Anti-Infiltration Act is designed to make every intermediary in this command chain who knowingly accepts sponsorship and instruction to carry out this illegal behavior as liable as the final person doing it. That is another piece of legislation that has already taken effect.
-
(pause)
-
This is really helpful. A lot of these policies are very innovative, but they seem to reach younger people the most, people who are already better at detecting disinformation. Have you found these policies to be effective at either stopping disinformation or raising the media competence of older people, who are usually more susceptible to these disinformation efforts?
-
That by itself is a misunderstanding. People who spend more time online are more digitally literate; they encounter more diverse information modalities. A digital native may be young but may have spent a decade or more in online communities, whereas an elderly person may be old but may be a newbie when it comes to online communication.
-
Maybe they’ve only had their online account or FB account for a couple of years. They’re digital migrants. They’re young by Internet-experience age.
-
It doesn’t matter whether they’re really young people, like 7 or 8 years old, who are literally very young but also inexperienced when it comes to the Internet, or whether they’re 70 or 80 years old and this is their first year participating in online democracy and activism. It’s all very exciting.
-
They have much more in common with each other, across the different ages, than with people who have more than a decade of participation in online communities. The media competence work, which has its own website, the mLearn website, doesn’t make a distinction between a K-12-only curriculum and lifelong education.
-
FB, in particular, partnered with the Hondao Elderly Care Foundation to localize the media competence content. If I’m 70 years old, I don’t want to be corrected by the Trend Micro bot all the time. I also want to elevate my social status within my family by correcting my grandchildren’s messages. [laughs] There’s an incentive for them to learn in elder care and elder community places.
-
The elderly are not unwise. It’s just that they are coming to grips with this new mode of communication, like a seven-year-old. They can share the same curriculum materials. Media competence education is a widely agreed-upon, multi-sectoral approach. That really is the real solution.
-
Only when everybody becomes a capable, critical, and creative thinker do we ensure a biodiversity-like defense against viruses…No virus can eradicate a very diverse field. What we have is a memetic ideo-diversity, so that no disinformation can provoke outrage so suddenly and so uniformly.
-
It sounds like you guys think about disinformation a lot like a virus.
-
Exactly. You can’t sit down and negotiate with a virus; it’s not in the same category. If we use public mental health as the model, then what we can do is achieve universal coverage of accurate information and clarification, and develop the supporting research, like real-time dashboards and so on, that offers insight into it.
-
When it’s really serious, like foreign-sponsored, hyper-precisely targeted political ads during an election, then we develop quarantining processes.
-
I took a disinformation class at Harvard most recently with Joan Donovan. I’m not sure if you’re familiar with her.
-
Heard of.
-
We learned that oftentimes the best way to counter or to clarify or to…
-
Disarm.
-
…deweaponize, disarm disinformation is if someone from the same camp comes out and says something like, “This is actually not right.”
-
That’s right.
-
Have you guys been able to – I don’t want to point at any party – work with other political parties on this effort?
-
I don’t belong to any party, so there is no “other party” for me. The only party I’m somewhat affiliated with is the Unstoppable Happy Party.
-
(laughter)
-
I was just on Brian’s show a couple of days ago. In any case, yeah, I’ve been trying a delivery mechanism myself which is exactly as you said. If anyone, for example, on FB or on Twitter…The Twittersphere in Taiwan knows that if anyone mentions my name in their tweet, it is very likely that I will like that tweet, saying essentially that I’ve read it.
-
This is not countering disinformation; this is to increase proximity with people. It can also be useful if people are spreading untrue information about me. For example, there was a widespread disinformation campaign right before the election that accused me of taking down Facebook groups, which is a serious accusation.
-
I have no interest in, no control over, and no power over a private-sector company. It’s not like I mind-control Mark Zuckerberg, which was the frame of that disinformation, through, literally, mind-controlling superpowers.
-
(laughter)
-
Along with those tinfoil pictures. In any case, the point is that if I clarify it myself, sometimes that just reinforces the message, which is not effective. This is, to me, personal. This is not something that the ministry can come out and clarify.
-
What I tried doing is just sending private messages to the people sharing those messages. If there are 100 people sharing this piece of disinformation, I directly message those 100 people. It’s interesting, because my message is very calm, showing just as a plain fact why it’s not the case, and I’m nonpartisan, by the way.
-
Maybe 1 in 10 or so of the people who receive the private message will actually clarify for me and say that they made a mistake sharing it, or they amend their post to say this is what Audrey has to say, at least doing a balanced report.
-
To their friends and families, it’s their own post making my case. It’s actually quite effective clarification.
-
Interesting. You essentially make it more human.
-
That’s right, that’s right. If people say, “Ah, it’s just a bot,” I sometimes just record a short video, speaking directly and honestly about it. That’s very convincing, because for them Audrey Tang used to be a kind of abstract symbol but is now a real human being trying to make connections.
-
(pause)
-
You said about 1 in 10.
-
About 1 in 10.
-
That they become the clarifiers.
-
Yeah, that’s right, that’s right. At least it inoculates them against future such rumors.
-
This is really interesting, and I haven’t seen this before. Could you talk a little bit about how you came to this delivery mechanism, like what were some of your ideas behind it?
-
It’s a very old tradition. In the old Usenet days, the old Internet before the World Wide Web, there was a person who went by the name Kibo, and there was this whole practice of looking at each and every public post on the Usenet forums. Whenever anyone mentioned Kibo, he would go and make a reply, and that’s called kibology.
-
One of my mentors, Larry Wall, the inventor of the Perl language, with whom I worked very closely for many years, did the same when he first invented Perl. He would look at any posting that involved text processing using the previous generation of tools, and he would jump in and say, “You know, Perl is a better tool for this job.”
-
(laughter)
-
This is the kind of advocacy that immediately shortens the distance between the nascent Perl community and the existing communities that might misunderstand what Perl is good for. It was a huge success. People became aware that there is a new tool, and that there is a personality behind it, sometimes quirky but always very humorous, Larry Wall.
-
That became very influential on my work, so I’m following in their footsteps while trying to build interpersonal connections. I eventually coined a term for this kind of interaction. It’s called troll hugging.
-
Troll hugging. That’s funny.
-
Like hugging a troll, hugging trolls. It’s my hobby.
-
[laughs] Have you thought about how to scale this?
-
Mm-hmm.
-
This seems to be very effective but also time-consuming.
-
You can semi-automate the process, of course.
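A minimal sketch of how such outreach might be semi-automated while keeping a human in the loop; the sharer list, message template, and review step are all hypothetical illustrations, not a description of any system actually in use:

```python
# Draft one calm, personal clarification per person who shared a rumor.
# Nothing is sent automatically; a human reviews each draft first.
TEMPLATE = (
    "Hi {name}, I noticed you shared the post about {topic}. "
    "Here is a primary source explaining why it isn't accurate: {url}. "
    "No pressure either way. Happy to chat!"
)

def draft_messages(sharers: list[dict], topic: str, url: str) -> dict[str, str]:
    """Return {handle: draft} for human review before sending."""
    return {
        s["handle"]: TEMPLATE.format(name=s["name"], topic=topic, url=url)
        for s in sharers
    }

if __name__ == "__main__":
    sharers = [{"handle": "@alice", "name": "Alice"}]  # hypothetical data
    drafts = draft_messages(
        sharers, "the invisible-ink ballots", "https://tfc-taiwan.org.tw/"
    )
    for handle, text in drafts.items():
        print(f"--- draft for {handle} ---\n{text}\n")
```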
-
(pause)
-
Is this being used now government-wide?
-
There have been quite a few attempts. For example, in the Civil IoT data application contest a couple of years back, a very similar idea was developed in 2018 by the team that won the top prize of that year. They worked with the Environmental Protection Administration to look at PTT posts that talk about air pollution.
-
Most of the time those air pollution posts are not inaccurate, but they’re using information or data from a few years back. It’s not strictly…It is like malinformation. [laughs] It’s not untrue by itself, but the title is not true. There was some disinformation of this shape during the election.
-
For example, showing a video of people protesting in front of the Presidential Office and accusing the media of not reporting it, when actually that protest was from a few years back, but it pretends to be a live stream. It’s actually a very difficult kind of message to clarify, because that actually happened, so… [laughs]
-
It actually happened, just not the right year.
-
That’s right, [laughs] that’s right. What they developed is a bot that sees this kind of post on PTT and automatically replies with a visualization of the real air quality at the time, and adds to it a very provocative title, a clickbait title, that makes it more viral than the disinformation.
-
They generated these clickbait titles by comparing the number of likes on all the environment-related posts with the press releases published around the same time by the EPA. They built a machine translator that translates from press releases to PTT clickbait titles. [laughs]
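A minimal sketch of that press-release-to-clickbait “translator,” framed as sequence-to-sequence fine-tuning with Hugging Face Transformers. The model choice, data file, and column names are assumptions for illustration; the contest team’s actual implementation isn’t described here:

```python
# Fine-tune a small multilingual seq2seq model to map EPA press releases
# to high-engagement PTT-style titles. Data file and columns are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical paired data: press_release text -> ptt_title.
data = load_dataset("csv", data_files="epa_press_to_ptt.csv")["train"]

def preprocess(batch):
    # Press releases become encoder inputs; matching PTT titles become labels.
    enc = tokenizer(batch["press_release"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(
        text_target=batch["ptt_title"], truncation=True, max_length=64
    )["input_ids"]
    return enc

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="press-to-clickbait"),
    train_dataset=data.map(preprocess, batched=True, remove_columns=data.column_names),
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()  # swapping the two columns would train the reverse direction
```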
-
That’s funny.
-
The model that they built is, theoretically, two-way. You could also turn clickbait titles into press releases, but I am not sure which is good and which is bad.
-
(laughter)
-
It’s a brilliant piece of NLP. The Environmental Protection Administration has learned a bit from the Civil IoT contest for its clarification messages. I think this idea, while not quite commonplace, is taking root, because there are certain kinds of fabrications of this shape.
-
They’re very difficult to clarify on a right-or-wrong basis. It can only be done by building a more emotional connection with people.
-
(pause)
-
Correct me if I’m wrong, but it seems like you guys are seeing how things go viral on the Internet and using those tools.
-
That’s right. We’re memetic engineers, too.
-
(laughter)
-
(pause)
-
My last class with Joan Donovan was called Memetic Warfare. [laughs]
-
That’s right.
-
The last question I want to ask you about is, I think, at the very forefront of technology: deepfakes. Have you guys thought about deepfakes and ways to counter them?
-
Yeah. It is very relevant, because FB just discovered quite a number of fake accounts whose avatar photos are deepfake images, and they were only able to discover it because some of the glasses reflect light in a funny way.
-
(laughter)
-
They trained their own AI algorithm to detect those deepfake photos, and that’s how it was discovered.
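A minimal sketch of training such a detector as a binary image classifier in PyTorch; the dataset layout is hypothetical, and FB’s actual detector isn’t described in this conversation:

```python
# Fine-tune a small CNN to separate real avatar photos from GAN-generated
# ones. Expects avatars/real/*.jpg and avatars/fake/*.jpg (hypothetical).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("avatars", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```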
-
This kind of attribution, making it common knowledge, and spreading some artistic and harmless deepfakes, that’s how vaccines work: exposing people to a harmless version of a virus, making people aware that such things exist, so that they ask twice.
-
First, where is the message from? How do you check the credentials, how do you attribute properly? Taiwan is pretty resilient, because we have lived with the animated news, 動新聞, for more than a decade.
-
(laughter)
-
People know that motion capture can do wonders. It’s just that previously you had to build a large studio and have a “Lord of the Rings” post-production facility to make a convincing Gollum. Now everybody with an iPad can do so, because computational power has become much stronger.
-
People are generally aware that motion-capture videos are out there, thanks partly to the animated news. So we popularize the fact that deepfakes exist, and share with people that the research community can now convincingly synthesize not only images and writing, which was solved long ago, but also video with voice.
-
That was difficult last year, but this year it has become commonplace. Just raising this awareness is plenty for them.
-
If I can have one follow-on. A lot of policy is about framing, and it seems like you have framed disinformation as…
-
…public health issue.
-
As a public health issue. Where does that come from, and what was the thought process behind that? For a lot of people in the US, maybe it’s just our culture, we see it as a warfare issue, [laughs] but you guys see it as a public health issue.
-
Ideologies in Taiwan are part of the bread and butter of political conversation. Every word, especially in a very poetic language based on ideographs, has five different associations.
-
To engage in politics is to engage in poetics. This kind of poetic reinterpretation of old ideological terms, this kind of memetic variation, is just basic political awareness, including for the term Taiwan itself.
-
What we are trying to do here is make politics out of the various cultural interpretations of a shared tongue, and move in a transcultural way that looks at one part of Taiwanese culture from the viewpoint of another part of Taiwanese culture, and try to build common values out of very different cultural lineages and positions.
-
This continuing process is what makes us see those ideological tensions more like earthquakes; here they are literally a clash between tectonic plates. It’s not warfare, but rather a shaping process that raises Jade Mountain two centimeters every year. People see it as part of politics instead of necessarily as warfare.
-
Public mental health is of course very important if you are going to do politics in this kind of environment. This idea of right or wrong, left or right, this binary thinking, is of course present in every election, but there is also a core consensus.
-
If you interviewed the most ardent supporters of President Tsai and Mayor Han at their final rallies, both would still agree on the democratic process of getting collective goals out of different ideas, and on the need, the necessity, to share and build closer connections with the entire world, with even more different cultures. That’s the core thing that both camps can agree on.
-
If you have that kind of culture, it’s very easy to see this in a “virus of the mind” way instead of a “good forces versus bad forces” way.
-
The public health mindset almost brings people together.
-
That’s right, because anyone who is infected with the flu also threatens other people, but they’re not bad people. They’re just sick, and sick not in a moral sense but in a public health sense, and they may get cured.
-
The common flu is gone in a week, but during that week they may have infected other people. People may realize a day later that a piece of disinformation is intentionally wrong, but they have already shared it with other people, so there really are structural similarities.
-
(pause)
-
That’s fascinating, because in a sense you take out the polarization, which is exactly what disinformation is meant to create.
-
That’s right, exactly. Exactly, and doing so in a humorous way.
-
(pause)
-
Thank you so much. This is super helpful.
-
Thank you.
-
I really appreciate this. I really appreciate it.