If they detect some misinformation that’s already clarified by Cofacts, the bot just pushes out a message saying, “Oh, you sent this, but this is already clarified, and this is what Cofacts has to say about it.” Cofacts is not just about individual people reporting, but also about a ...
For example, in LINE, you may have a group with 10 people in it. You can invite Trend Micro in. If you invite Trend Micro, then for each message everybody sends, Trend Micro compares it to the Cofacts database, but it doesn’t send anything. It’s not blocking anything. It just ...
The important thing here is that Cofacts is not just a standalone tool. It’s also a database for other people to use. For example, in Taiwan, we have an antivirus company called Trend Micro, which is pretty famous. Trend Micro also makes a bot. You can invite the Trend ...
Last time I visited Thailand, I actually talked to many people about it. There’s a friend called Sunit running a social innovation collective. He visited Taiwan to serve as the judge of our Asia Pacific Social Innovation Partnership Award. During his trip, he visited Johnson and the Cofacts team to ...
It’s just a piece of information that’s forwarded to Cofacts. Cofacts only learns who flagged the message, when the message was sent, the content of the message, and a little bit of metadata around the date, and so on, but it doesn’t learn about our chat history. That’s ...
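For illustration, the minimal record described here could be sketched as a small data model. This is a hypothetical Python sketch, not the actual Cofacts schema; all the field names are made up:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of the minimal record the passage describes: only the
# flagged message and a little metadata reach Cofacts. Deliberately, there is
# no field for the surrounding chat history, which never leaves the chatroom.
@dataclass
class FlaggedMessage:
    flagger_id: str    # who flagged the message
    sent_at: datetime  # when the message was sent
    content: str       # the content of the message itself

record = FlaggedMessage("user-42", datetime(2020, 1, 1, 12, 0), "Some rumor text")
print(record.content)
```

The point of the sketch is what is absent: nothing about the other participants or the rest of the conversation is ever part of the record.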
It’s not necessarily personal information. This is mostly just a virus that uses our trusting relationships as a way to propagate. Just like chain letters, this is not a new thing. Because of that, when we flag a piece of information as spam, we’re not saying that we’re sending the ...
If you send me a piece of information, of course, that’s just between you and me. But if you send me a piece of information that you saw elsewhere, you’re just spreading it, because you see a piece of information that makes you angry or something, and you don’t actually check for the ...
Yeah. The idea is that people can flag anything that they consider misinformation or disinformation. When a sufficient number of people flag the same message, it gets a higher score on the digital accountability system. The fact-checkers can then focus more of their energy on the one that’s ...
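As a rough sketch of the prioritization described above, here is one way flag counts could drive fact-checker triage. The names, the counting rule, and the ranking are all hypothetical, not the actual Cofacts implementation:

```python
from collections import Counter

# Hypothetical sketch: count distinct users flagging each message, then
# surface the most-flagged messages to fact-checkers first.
flag_counts = Counter()
_seen = set()  # (message, user) pairs, so each user counts once per message

def flag(message, user_id):
    """Record one user's flag on a message."""
    if (message, user_id) not in _seen:
        _seen.add((message, user_id))
        flag_counts[message] += 1

def triage(top_n=3):
    """Return the messages fact-checkers should look at first."""
    return [msg for msg, _ in flag_counts.most_common(top_n)]

# Usage: three users flag the same rumor, one flags another.
for user in ("u1", "u2", "u3"):
    flag("Rumor A", user)
flag("Rumor B", "u4")
print(triage())  # "Rumor A" ranks above "Rumor B"
```

The design choice sketched here is simply that volume of independent flags, not any editorial judgment, decides what fact-checkers see first.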
I think if you want quantitative numbers, like whether it’s growing or whether it’s shrinking and so on, you can look at both the Cofacts database, as well as the overall LINE dashboard.
Exactly the same way, people can report a piece of information as misinformation or disinformation by forwarding it to the Cofacts bot. Or, now in LINE itself, they can also just report it using the official LINE fact-checking bot.
I can send you the URL later. We’re not talking about a general impression; it’s just like spam email. If you ask how many spam emails there are, of course, you’re going to ask Spamhaus, which has the signatures from people who donate their email message ...
All four fact-checkers have a collaboration with the LINE Corporation, so that people can flag anything as a rumor or as spam, and then LINE forwards it to all four fact-checkers. Each of them can contribute back to the LINE system. The reported disinformation can be seen ...
Huan-Cheng. It’s Johnson Liang who came with us here, with the intern nicknamed Mr. ORZ. He was one of the founders of the chat bot called Cofacts. There are also many other partners. There is one called MyGoPen, one called Rumors & Truth, and also the Taiwan Fact-Checking Center, which ...
The Cofacts team.
In this collaboration, the LINE company, through their corporate social responsibility program, publishes a digital accountability dashboard to highlight the most viral rumors, misinformation, and also disinformation on their platform. They work with four fact-checking partners. One of them is actually in Thailand. They arrived with us.
This is not to say that all that elderly people read is LINE, but they spend more time on LINE proportionally compared to young people. That’s what I was saying.
No, what I’m saying is that disinformation is in all channels, but the elderly prefer to use LINE. Young people may have many different social media applications on their phones, such as Instagram, Facebook, Twitter, and so on. It’s very likely that elderly people only have LINE in ...
Mostly, the elderly rely on LINE for their everyday communication. Whenever there is some disinformation or a rumor, it’s very likely that elderly people receive it from the LINE channel first, before any institutional or social media.
Because of that, they’re very eager to use the Internet. However, they don’t use social media that much. Among the elderly, usage of LINE, which is an end-to-end encrypted messenger and not really a social medium, is much higher than usage of social media such as Facebook or Twitter.
The elderly actually use the Internet very actively in Taiwan, partly because we have very affordable broadband access, but also because the literacy rate is very high.
Taiwan is very quickly becoming an aging society. We’re roughly three or four years behind Japan, which is already an aging society. We’re catching up really fast, let’s just say that.
Thank you.
The other thing that you might or might not want to include is that we refrain from using the word “fake news,” because news and journalism in Mandarin are the same word. There’s no way to say the F-word without offending journalists. Both of my parents are journalists, so out ...
Everybody in the chat group, instead of thinking, “Oh, a message was disappeared by censors,” would instead learn something about the journalistic value of fact-checking and so on.
Many other jurisdictions in Asia implemented a certain amount of administrative override of journalism and free expression because of the disinformation crisis. Our way of innovating, I think, commits to free and open values by issuing real-time clarifications instead of takedowns, like the Trend Micro bot.
The minister’s words never sit above a journalist’s words, and we never take away journalistic freedom by, for example, issuing takedowns of journalistic output. According to the human rights society Civicus Monitor, we’re the only jurisdiction in Asia that completely implements this stance.
We’re just people who provide real-time clarifications, sometimes in a very funny way, for the journalists to work with. The legitimacy is in the social sector and in the journalism business. That is the first thing.
The other thing is that we say to the journalistic community that we are partners, that we’re doing this to encourage their fact-checking efforts by, first, never calling ourselves, the administration, the fact-checkers.
This is not about free speech. If somebody intentionally shares data and information that they know is simply not true, and that endangers other people’s lives, that, of course, is clearly outside freedom of expression. That’s the legal concept.
In any case, whenever this kind of issue happens, we understand that democracy builds upon health. Literally, the health of the people. Democracy should prevent this kind of disinformation from spreading.
We’re just applying previous acts that discourage or penalize people for spreading misinformation when an outbreak – SARS, for example – occurs, which is actually happening in a nearby jurisdiction.
This idea of intentional public harm is all built upon existing legal concepts from the pre-digital media legal system, so the court can very predictably make judgments based on these criteria. We’re not inventing novel legal concepts for disinformation.
That’s a great question. I think a clear legal definition of disinformation is the first step, because without such a definition, the term is very easy to politicize. We define disinformation as intentional untruth, intended to harm the public, not just the image of a government, which ...
Sure, of course.
Maybe one more question.
That’s what FB has done. They set up a war room to ensure a rapid response to such divisive or counterfactual advertisements. Google, for example, along with Twitter, simply said, “OK, maybe we don’t run political and social advertisements during your election.” That’s fair, too.
That’s what Facebook has done with the ads library, so that if there is a candidate that uses this hyper-precision targeting, spreading disinformation to discourage people from voting, they can be called out within one hour and face social sanction.
These are filed as people friendly to the candidate voluntarily putting huge amounts of money into precision targeting. We think that’s violating our norm. You have two choices. First, you can agree to open up, again in real time, whatever precision-targeting terms and criteria are being put forward in ...
We just told the major social media companies, “This is the norm in Taiwan; we expect democracy to work like this.” We see clearly from the Control Yuan data that there are certain expenses for precision targeting on your social media platforms that are neither ...
Not just the statistics, but the raw data is available for independent investigative journalists, as well as data scientists, to draw conclusions about political contributions. This is similar to the US idea of honest advertisement when it comes to campaign financing, but in real time and in raw data form.
They also have an incentive to learn this kind of media literacy. Our Control Yuan, which is a separate branch of the government, establishes a norm where all campaign donations and expenses are transparent, down to the raw data level.
Like people in their 80s and so on, who perhaps don’t like being corrected by the Trend Micro bot in their family chatrooms all the time. They also want to be contributors, correcting their grandchildren’s messages. [laughs]
The norm they have broadly agreed to is one of not only transparency and accountability, but also lifelong education and empowerment. FB, for example, partnered with the Hondao Elderly Care Foundation to make sure that the digital competency information packages are not just targeting young people, but ...
Sure, and my colleague, Joel, can send you what we call a norm package, which is a counter-disinformation, self-regulating agreement, a kind of a pact between the likes of Google, Facebook, Yahoo, Line, PTT, and so on.
The Trend Micro bot built this kind of balance, again, in a very unlikely place, which is group chat. That is an important contribution.
It may already have reinforced their thought patterns. Responding within a split second draws on a very old idea in journalism, called balance in perspective and reporting. If you have one source, you need to check the other source.
If people receive a piece of disinformation, go to sleep, and wake up to see the clarification, that’s of little use, because the disinformation has already framed their understanding. It’s already written into their long-term memory.
That bot scans each incoming message, as an antivirus program would do, and compares it with a crowdsourced collaborative fact-checking ecosystem, which we call Cofacts. If the message being shared in the group is a disinformation package, the bot, within a split second, responds saying, “This has been clarified, ...
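The scan-and-reply behavior described here could be sketched roughly as follows. This is a toy in-memory lookup under stated assumptions; the real bot queries the Cofacts database, and the example rumor and exact-match rule are made up for illustration:

```python
# Hypothetical sketch of the lookup: compare an incoming group message
# against already-clarified rumors and, on an exact match, reply with the
# clarification. On no match, stay silent rather than block or delete.
clarified = {
    "Drinking hot water cures the flu": "No evidence supports this claim.",
}

def on_message(text):
    """Return a clarification reply, or None to stay silent."""
    clarification = clarified.get(text.strip())
    if clarification is None:
        return None  # unknown message: the bot never blocks anything
    return "This has been clarified: " + clarification

print(on_message("Drinking hot water cures the flu"))
print(on_message("hello"))  # None: the bot stays silent
```

The key property the transcript emphasizes survives even in this toy version: the bot only ever adds a clarification alongside the conversation; it never removes or suppresses the original message.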
Then they also developed a chat bot, a LINE bot, that scans each incoming message. It’s like if you have a WhatsApp group – actually a LINE group in Taiwan, because people in Taiwan use LINE more. You invite that bot into your chatroom.