I think all these individual elements are easy to adapt in other jurisdictions. The Taiwan model is not a one-size-fits-all thing. It’s rather a very gentle idea that if a government trusts its citizens more, citizens can innovate better than governments. Then we apply it in various ways – fast, fair, and fun.
Ideally, we don’t select, over time, as many people who fit that box. Then, ideally, they have a bunch of awesome people whom they wish were part of this network as well. By just the fact of having someone who’s trusted refer someone else, we give them, again, a low bar.
We see that when we put randomly selected people in a room, in fact, they create intelligent solutions and trust. This is true at the local and national level, but it can also be true at the global level. I have two points for you, Minister, but they’re also two points for everyone in the audience.
Before, the pharmacy statistics were published maybe every week or every day, but the people in the g0v movement thought that if you publish them every 30 seconds, then it becomes a distributed ledger that everybody can participate in to guarantee the correctness. People don’t have to blindly trust the government or the companies anymore.
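The auditing idea described here can be sketched in a few lines: if every independent observer hashes each 30-second snapshot of the published data, agreement between observers substitutes for blind trust in any single publisher. This is only an illustrative sketch; the record fields and numbers below are made up, and the real g0v mask-map pipeline is not reproduced here.

```python
import hashlib
import json

def snapshot_hash(records):
    """Canonical SHA-256 over a published snapshot of pharmacy stock records."""
    canonical = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical 30-second snapshot as two independent observers would fetch it.
published = [
    {"pharmacy": "A123", "adult_masks": 480, "child_masks": 200},
    {"pharmacy": "B456", "adult_masks": 130, "child_masks": 90},
]

# Observers polling the same snapshot compute the same digest...
assert snapshot_hash(published) == snapshot_hash(list(published))

# ...while tampering with even one record is immediately visible.
tampered = [dict(published[0], adult_masks=9999), published[1]]
assert snapshot_hash(tampered) != snapshot_hash(published)
```

The point of the design is that verification is cheap and parallel: anyone can re-hash the public feed, so correctness is guaranteed by many watchers rather than by one trusted authority.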
It’s all very civil. People converged on the shared vision of four demands, not one less. Anyone who participated in that changes from within, so that they are much more willing to trust that a bunch of strangers in a well-facilitated place can produce something like a rough consensus out of differing positions.
It’s also a social problem. In Taiwan, when we tally the votes, opening the ballot box and taking each ballot out for everybody to see, we actually allow live streaming and recording from different directions. That’s a very strong way to make social trust possible, and to ensure the fairness and openness of elections.
According to the Taiwan FactCheck Center, there are many different types. One is the kind that tries to undermine public trust in journalistic institutions. This is perhaps, I would say, one of the most serious kinds, because journalism training is the main thing that disambiguates misinformation, as in being misinformed, from intentional disinformation.
I’m working in a very international way, making it visible that, instead of working for any particular agenda or any particular government, I’m rather just working with people, in Taiwan. I guess the public sort of trusts me most, so I can do most of the research here.
This is not just about blacklisting. This is also about whitelisting, like promoting the components that pass the lab test here, that originate from here or from trustworthy partners, and things like that. We also build a brand for ourselves, like we’re battle-hardened, so you don’t have to test it yourself.
There are entire reports about the Russian manipulation of the US parties, so I don’t have to repeat that research. Finally, when you see this information packaged, that’s already the last stop in their work. They have already identified the precision-targeting criteria needed to maximize distrust for these particular people.
Like every Wednesday, you can find me in my office. Every other Tuesday, I just tour around Taiwan to meet the people. I think trust is just something you build by sharing experiences at short intervals. It’s like having a friend you meet every week for a movie, basketball, or whatever.
There are a lot of mutually trusted friends, like a few New Zealanders, like Richard Bartlett, who, at the moment, is in London. He’s just wandering around and taking the vTaiwan and Loomio... He’s a co-founder of the Loomio project, from which we drew a lot of inspiration when working on vTaiwan.
One of my friends in the Perl community happened to be a project manager when Apple first acquired Siri, working on language technologies. He also wanted to finish his doctoral thesis at CMU, so he needed someone he could trust to help with the team while he worked on his PhD.
You don’t have to take the government’s word for it. You can still say, "The government is probably lying. [laughs] We trust this." It’s within your journalistic discretion, but at least the government is not being unresponsive, like taking 7 days or 60 days just to catch up on an accusation.
Then suddenly, people who shared the same affinity to a keyword just started trusting each other in a very quick way, and then it scaled out. It didn’t scale deeply, nor did it really scale up the number of people. All it did was really scale out this shallow, not-quite-listening kind of listening.
Still, it’s useful, because sometimes what we are interested in is the questions they want to ask, the things they want to put on the agenda. Maybe they don’t trust the government enough to identify themselves, or maybe they don’t want to go through the hassle of identifying themselves.
Thank you so much. It’s so lovely to talk to you. I feel very inspired, and you’ve given me a lot to think about in terms of my own work and this approach of trust, particularly in the face of fear. It’s a big lesson for me, so thank you.
Part of this is the idea of peer-to-peer governance. The word "command" is the antithesis of a peer-to-peer relationship. As soon as you give a command, that person is not your peer anymore. In order to include more stakeholders in the dialogue, the government is now learning to trust people more.
Let me ask you about AI. So, AI is clearly helpful in this project. Would you ever trust an AI to actually make decisions? To take all the info from Bowling Green and then just say, “You know what? We’re going to do this.” Will we get there? Do you want to get there?
The reason is that if you defend only a little bit of very highly sensitive data, for example the passkey or the biometric chip on the device, as in a zero trust architecture, then the surface you defend is just your fingerprint, your device, your connectivity. So it’s easy to focus the defense on this.