But the idea is that when I sign official documents, there’s no password anymore. Biometrics are verified by an on-device implementation, device integrity is verified by CrowdStrike, and behavioral patterns are analyzed by a cloud layer. And we never go to the same vendor for two adjacent layers, not only to avoid vendor lock-in, but also to avoid the Microsoft hack situation, where vertical integration sabotaged the whole zero-trust architecture idea because the root, the Active Directory, was hacked.
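The "no same vendor for two adjacent layers" rule described above can be expressed as a simple invariant check. This is a minimal sketch, not the actual system: the layer names and vendor labels are hypothetical placeholders (only CrowdStrike is named in the source).

```python
# Hypothetical sketch of the vendor-diversity rule: no two adjacent
# verification layers may share a vendor, so compromising one vendor
# cannot break two consecutive links of the chain.
layers = [
    ("biometric", "on-device"),          # biometric match never leaves the device
    ("device-integrity", "CrowdStrike"), # named in the source
    ("behavioral", "cloud-vendor"),      # hypothetical cloud analytics vendor
]

def adjacent_vendors_distinct(layers):
    """Return True if every pair of adjacent layers uses a different vendor."""
    return all(a[1] != b[1] for a, b in zip(layers, layers[1:]))

print(adjacent_vendors_distinct(layers))  # True for this configuration
```

A deployment pipeline could run a check like this whenever the layer configuration changes, rejecting any update that reintroduces a single vendor across adjacent layers.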
Online or app-only layer: Even if it begins to answer, the response is overridden.
Built-in censorship in the offline model: As mentioned, the offline version still has triggers for certain keywords or historical events, causing it to skip its normal thinking process and produce unnatural evasions. Unsurprisingly, this undermines trust; people may feel the censorship is too pervasive and not want to use it.
Underlying ideological perspective: In its “natural” mode, it will answer, but the answer reflects a particular ideology.
But yeah, some of them, I would say, were not just a precondition; they were also the result of decades of work. Yeah, because ten years ago we were at the cusp of a very polarized society. And I mean, even before the Sunflower movement, there was a lot of distrust of the government and the state apparatus. And there was a real danger of becoming polarized the way the US is right now, with the administration enjoying 9% trust from the citizenry.
Yeah. Yeah, no, completely. And it’s a space that we’re trying to move more leaders into as well as we’ve… And we’re looking at actually… we’re taking a pause in our program, a strategic pause, to look at the curriculum that we’ve designed internally. And it’s all based on, you know, a core set of values, using values to find common ground, and then different topics on top of that, right? AI, finding disinformation, communicating across difference, trust-building.
Right. The primary goal is transformative instead of just deliberative or generative: people feel agency because they can help set the agenda, owning a stake in the policy outcome. You only need this when the relationship is frayed. If the government already enjoys high approval, you don’t seek to transform the relationship; you keep it resilient, not disruptive. That’s why this model works best in “warmer” times, moments of urgency that motivate everyone to invest in horizontal trust.
So, to set up ultimate threat-indicator sharing such that, within a minute or so once we figure out what’s going on and have settled on an emergency way to resolve or mitigate a threat, how quickly can all the allies deploy it, and at the same time? That is the main metric we’re measuring. And I think having the code exercise take place physically, where people can meet face-to-face, is very important for increasing trust.
…Open-source model upon which the National Science and Technology Council is building the Taiwan Trustworthy AI Dialogue Engine. So, we already have a branch of open-source models aligned this way with alignment assemblies. But if more than OpenAI and Anthropic are on board, if PaLM and everyone are on board, then we have a real chance of establishing a continuously, democratically upgraded guardrail, which I believe is one of the keys to avoiding existential risk from this suddenly widening gap between the AI haves and the have-nots.
But still, I mean, our main digital communication infrastructure, here in Europe and possibly everywhere, has been provided mostly by a few big American tech companies for 15 or 20 years now. How can Europe be motivated, or what could the incentives be, to design its own market for such technologies, promoting trust, promoting consensus, and also presenting plurality of opinion as a chance, as something good? How can we get there to create such a market in Europe? How… where should we start?
I totally agree. I think the relationship between civil service and civil society is another of these absolutely fundamental relationships where trust has to be rebuilt. It speaks also to the underlying philosophy that runs through Audrey’s work and that she had actually just put out into the world, when I brought her over to the UK, in the form of a book co-authored with Glen Weyl called Plurality. I decided at this point to steer her into talking about that.
So there's the technology itself, like what it does, but then kind of separate from that is the people's trust in the technology. And you said since you did it for 10 years in Taiwan, there was like a social approval because people were used to it. What's the threshold beyond which people believe this? Like, could this happen in the United States now on some issue that isn't existential, but is interesting to people and relevant to their lives?
So there’s a diminishing return: even if we all agree that it’s better to listen to two million people than to 2,000, we can’t actually digest two million people’s voices, which means that the majority of those two million people will feel disempowered. They gave us all their suggestions, but we don’t have the bandwidth to digest them, which is why most policymakers kept to Dunbar’s number, around 150 trusted experts, or MPs, as the voice of the people.
Yes, and it sometimes impedes the adoption of technology for governance. Technological solutions, such as using AI to detect abnormalities or drones to inspect infrastructure, are enabling machines to take over governance tasks from humans. On the other hand, governance based on these new technologies takes time to gain people’s trust, which delays the revision of rules. What do you think is necessary to promote the adoption of new technologies for governance, especially dialogue between policymakers and engineers to form optimal rules?
Definitely. This is in fact, as you said, about going across the private and public sectors, the government, bringing everyone into it. I love what you said about creating different habits as well. As you said, we’re being attacked by this virus, be it COVID or digital; we need to find new habits, new ways to do that. Thank you. Thinking about the next 18 to 24 months, what do you think will change or have an impact on how we treat our identity, trust, and security?
Yes. I had a very good education and experience in Dudweiler, near Saarbrücken, because it’s close to the French border. So we had a bilingual education, actually. And it’s very nice because even though I didn’t speak a bit of German or French when joining, still a very handicapped place to start, the people were very nice to me and they trusted a child as if the child were an adult. This is very different from the Taiwanese culture back then.
Then it was a couple of years after that, a couple of years ago, that another opportunity came along, called the Asia Pacific Internet Development Trust. This is a joint project with the WIDE Project, with Jun Murai in Japan, and it has a large endowment invested for a continual return, which provides some millions of dollars per year for more of the internet development work done either by APNIC, by the foundation, or by the WIDE Project.
We won’t do any interoperating if it doesn’t uphold our rigorous privacy bar and that means message contents are encrypted with the protocol and metadata is encrypted using the techniques that we use to keep it private. Now, how do you interoperate without sharing metadata? This is a very difficult question. And then how do we trust that once we’re interoperating that our privacy promises are being adhered to on the backend of the other side? These are not questions with clear answers at the moment.
Just to jump in on this private-public issue: in one of your interviews, you said, and it was very interesting for me, that in Taiwan there is a collective priority of rebuilding strong mutual trust between the government and civil society. This is something that a European nation, especially people who worry about the private sector having too much control over the government, could perhaps look into. I would like to ask you how post-pandemic democracy could limit the influence of large private corporations in decision-making.
At that point, of course, because it’s already the norm, anything that dissociates itself from the norm, like saying, “No, I don’t think insurance is important,” carries a tremendous cost in social trust and capital for any stakeholder who actually goes against this crowdsourced broad, rough consensus. Mostly, they offer technical compensation and the things that are required for them to implement this, but nobody actually goes against the crowdsourced agenda, saying, “No, this is not important,” because obviously this is important to people of all different stripes.
Minister without Portfolio Audrey Tang previously served as an external expert to Taipei City in 2015 to make the budget system more visual. The bureaus and departments were very resistant at the outset, but after some education and training, they asked their staff to reply to hundreds of online questions one by one. This created quite a stir because city residents discovered they didn’t have to go through the media or public representatives to have direct exchanges with government officials, and it deepened the public’s trust in those officials.
…It can only run on the server you trust. You still do the typing together, just like Google Docs, and I designed this spreadsheet, called EtherCalc, with Dan Bricklin. It’s like Google Spreadsheet, but the difference is that it’s on a server that you trust and control, and at any point you can download everything. It’s called data portability: you can then put it on a friend’s server. If your own computer has a hardware problem, you can migrate with no problem at all. The point is that…
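The data-portability flow described here, download everything from the server you trust, then re-import it elsewhere, can be sketched as a simple round trip. This is a hypothetical illustration, not EtherCalc’s actual export format; the function names and the JSON layout are invented for the example.

```python
# Hypothetical sketch of data portability: export all spreadsheet data
# from one server, then rebuild it on a different user-controlled server.
import json

def export_spreadsheet(cells):
    """Serialize all spreadsheet data so the user can take it anywhere."""
    return json.dumps({"format": "example-export", "cells": cells})

def import_spreadsheet(blob):
    """Rebuild the spreadsheet on another server from the exported blob."""
    data = json.loads(blob)
    return data["cells"]

original = {"A1": "hello", "B2": 42}
blob = export_spreadsheet(original)   # downloaded from the old server
restored = import_spreadsheet(blob)   # uploaded to a friend's server
print(restored == original)           # True: nothing is lost in migration
```

The design point is that the export is complete and self-describing, so migration never depends on the original server staying available.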