Back then, I was very interested in assistive intelligence, that is to say, AI that helps people understand each other and build trust with each other. I was doing research, and I told the head of school, “Look, my textbooks are all at least 10 years out of date. Do you want me to stay in the school system, in the institution, or do you want me to do some research?”
If Taiwan sounds almost too idealistic, it’s because we reject that binary, linear thinking. We see it more like two axes. On one axis is the state’s capability to trust its people on social innovation; on the other axis is how much the economic sector, like the so-called national mask production team, is willing to work in a pro-social way. In the end, both serve the social sector.
Otherwise, the people who make that decision are at a disadvantage, vis-à-vis the part of the liberal democracies that did not make that decision. I think that’s been the argument for OPIC and JPEC as well, like for foreign aid joint projects. We need to make an entire stack that comes entirely from liberal democracies that can trust each other’s cybersecurity labs for tested components, and things like that.
It’s handed out by the state. The software is written by CHT, the Chunghwa Telecom, which is an almost state-owned but not quite telecom operator. People generally trust the CDC, but because it’s opt-in, if you don’t want a digital certificate, you can use other means to file your tax. There is no mandated requirement that you get an eID. You just get one if it suits you.
It’s one of the neutral roles that g0v is playing, along with the pro bono lawyer team and the pro bono medical doctor team. The communication team, the g0v team, was also one of the three neutrals during the occupation. We were given a lot of trust and legitimacy, and provided the occupiers, but also the counter-protesters, as well as all the NGOs, with tools to do consensus gathering for their work.
That’s the main issue, and that’s why we’re bringing out citizen participation platforms: to increase trust, not only in the NDC but in the extended units in general. What these platforms will obviously do for them is, as you said, turn around the face culture, to make sure that when, say, our Economic Ministry releases data independently, it’s a way to regain face instead of to lose face.
So g0v is a way for us to get these three groups of people and to form a series of spaces, with open space technology — online and offline — to become an organism, so that we learn from the activists and the media people what is really in the public interest. The media people learn from us how to actually make an impact, and activists learn from us how to trust strangers.
Yes, we can come back to the legal definition. It’s like the definition of a lawmaker, so I will trust it, because you know what it is. [laughs] But we have also seen another kind of people who want to make this a space of insecurity: using this space to upload and download illegal material, and also hacking into systems to reveal private information. You mentioned it yourself.
However, if we steer this toward assistive intelligence, we can narrow the divide. We can run open, Civic AI trained on local languages and local cultures. Japan and Taiwan are doing this—Taiwan with the Trustworthy AI Dialogue Engine (TAIDE)—to ensure digital competence is distributed horizontally, like a school of fish. This empowers smaller nations to build their civic capability, turning technology into a ladder for equity rather than a fence of exclusion.
People will increasingly attribute experience — even qualia — to AI agents; some will argue for their consciousness and moral standing. As vertical authorities (ministers, scholars, journalists) lose automatic deference, because anyone can summon agents that speak with tones of authority, we must tend the fabric of trust horizontally: peers who share language, context and evidence.
It’s great if you have your local facilitators and so on, but actually the same facilitator cannot facilitate an adversarial group, a community that’s hostile to this community. But they have their own trusted facilitators too. So if they independently do this, we can actually give those bridge-making narratives to both. Or even to your congresspeople, saying: here are the things that you promised us that people can live with.
Again, it’s very easy to build swift trust among people who share the same hashtag, even though you haven’t met them before. It’s two sides of the same coin. What we are now doing is basically ensuring that you can very easily get access to ministerial people. For example, this is my office, my real office. This is just a meeting place. This is the Social Innovation Lab in Taiwan.
This is why, in e-participation, as in Slido, you can always participate as an anonymous person, as a pseudonym, or anything in between. We’re not asking anyone to disclose their real name or identity. It’s just a consistent handle that we can have a back-and-forth conversation on. That’s all we ask. This has nothing to do with surveillance and has everything to do with building trust over time.
That’s true. I think it’s a kind of multi-stakeholder collaborative governance that we hope to have, which should be inspired by an open approach to engender their trust. As you say, it’s as if nobody wants anything and everybody wants everything. It’s that kind of spirit. I think hopefully we’ll be able to also take that spirit and start the NAB activities and the social finance activities in that direction.
For example, I can give you an example of two questions that instantly raise people’s level of function. It doesn’t necessarily stay there when you go away. You can, let’s say, work with a group, raise their level of function, get them to connect with each other, build trust, make a decision and carry out an action. Then even if they revert to a lower level of function, the action will sustain them over time.
When there is a much shorter time span, as with climate change, we need to stop it... In five years, we need to agree on something. Otherwise, we better start sending out spaceships. Then we don’t have the luxury of building trust across generational worldviews. What I’m saying is that it’s really the same process. It’s just how zoomed in or out you are on the time scale.
The reason we set up the AIEC, and correspond closely with the US NIST AI risk management framework and its task force, with European counterparts and their ethics guidelines for trustworthy AI, with the UK counterpart, the AI Safety Institute — the list goes on — is that we’re like crossing a kind of frozen sheet of ice above a river, and we don’t quite yet know which place in that ice sheet is fragile.
Polis is very good at finding those CBMs across parties that don’t trust each other much, for example, Uber drivers and taxi drivers. Then the CBMs, for example, not undercutting existing meters, are realized first before we can even implement any laws to mandate top-down enforcement, because each party does actually want to extend a kind of olive branch to the other. The other time that we’ve run this internationally is the co-hack hackathon.
Part of the main agenda is to strengthen the democratic institutions, not just in our country but also worldwide: to move toward zero-trust architecture, move people off passwords, offer stronger authentication online and so on, and to develop the awareness that this kind of thing is going to meddle with our democratic processes. So this is the first thing, to raise awareness of the cyber and election threats that gen AI is posing.
We already adopted what we call a zero-trust posture, meaning that we’re shifting away from passwords. Passwords are very easily phished or scammed or things like that. And after attackers gain the username and password, they can escalate their privileges and then post hate messages or other material on the website and so on. So, phishing passwords is often the first stage in a cyber-attack that leads to manipulation, right?