-
So, yeah, why don’t we just start with your agenda here in New York this week.
-
Sure.
-
What are your main objectives? I mean, you gave this address…
-
…at Concordia.
-
At Concordia about AI and how Taiwan can contribute to how the world thinks about AI. But maybe you can walk me through the top line of what you’re looking at and talking about with other leaders this week.
-
Yeah, certainly. So as you know, we have an election coming up next January, so we will be the first in a long line of democratic elections throughout next year and the year afterward. We see that with generative AI, especially interactive deepfakes and precision persuasion… Previously, cyber attacks and so on were the purview of very resourceful, state-backed actors. But now with generative AI, the cost drops a lot, so anyone can mount these kinds of persuasion or manipulation attacks at virtually no cost; just look at voice-cloning scam operations, fraudsters, and so on.
-
Part of the main agenda is to strengthen democratic institutions, not just in our country but also worldwide: to move toward zero-trust architecture, move people off passwords, offer stronger authentication online, and so on, and to develop the awareness that this kind of thing is going to meddle with our democratic processes. So this is the first thing, to raise awareness of the cyber and election threats that generative AI is posing.
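To make the password-free authentication point concrete, here is a minimal sketch, not from the interview itself, of the public-key challenge-response idea that passwordless logins such as passkeys build on. The library calls are standard Python `cryptography` APIs; the enrollment and login framing around them is an illustrative assumption.

```python
# Minimal sketch of passwordless, public-key challenge-response login
# (the core idea behind passkeys). Names here are illustrative, not a
# specific production API.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Enrollment: the user's device generates a key pair and registers only
# the public key with the service; there is no password to phish.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Login: the service issues a fresh random challenge ...
challenge = os.urandom(32)

# ... the device signs it with the private key that never leaves it ...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ... and the service verifies the signature against the stored public key.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```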
-
Now, also to raise awareness that it is actually possible to use generative AI for good, by running alignment assemblies that get people talking about not just how to regulate AI, although that’s the first thing we talk about, but about any policy issue that goes beyond traditional polling or voting or referenda.
-
These are the kind of town halls or assemblies that can capture what people have said without losing the nuances, and compress it into a model that can be used not just to align future models, but to interactively talk with policymakers, so that we can say, “Oh, this group of people has this consensus,” without overly compressing it into just a few paragraphs of an executive summary.
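As an illustration only, and not the assembly tooling described here, the following sketch shows one way statements from such a process might be grouped so that each cluster’s shared view can be reported separately rather than flattened into a single summary. TF-IDF vectors and k-means stand in for the richer models a real system would use, and the sample statements are invented.

```python
# Minimal sketch: group assembly statements so that each cluster's shared
# view can be surfaced without flattening everything into one summary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

statements = [
    "AI-generated political ads should carry a visible label.",
    "Platforms must label synthetic media during election season.",
    "Model developers should publish red-teaming results.",
    "Frontier labs need independent audits before release.",
    "Watermarking deepfakes should be mandatory.",
    "Third-party evaluation of new models should be required.",
]

vectors = TfidfVectorizer().fit_transform(statements)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Report each cluster separately, preserving the distinct strands of opinion.
for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for text, label in zip(statements, labels):
        if label == cluster:
            print("  -", text)
```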
-
And we think this is going to be very helpful to deliberative democracy and community norm-setting on a lot of matters. So these are the two main messages that we’re sending. Now there’s also a technical part of it, which is the assurance framework, testing and verification, that we’re developing alongside NIST and other standards organizations around the world, to make sure that when a lab says its model is not going to harm democracy and can help democracy, we can verify that what it is saying is true.
-
Okay. So where does this fit in with Taiwan’s exclusion from international institutions? The UN is the biggest one, and that is the focus of Taiwan’s diplomacy this week. Will it be difficult for you to make progress on that agenda, given that Taiwanese nationals aren’t even allowed to set foot on UN campuses?
-
Yeah, I think there’s a lot of talk on the sidelines, right? And as I mentioned in my talk, we toil tirelessly from the sidelines. But I think our message is one that resonates: when AI harms democracy, we can counter that harm with more democracy and AI-assisted democracy. We’ve seen many top labs resonating with that idea. As I mentioned in the talk, we already partner with OpenAI and Anthropic, and from the response at the Concordia Summit, and from our recent meetings, it seems that Meta and Google are quite interested in helping out as well, especially around the testing and verification part.
-
These are not traditional bilateral or multilateral talks; it’s not as if those top labs are sovereign countries, but they are interested. And in a multistakeholder setting, I think this message resonates very well. I also think the UN is evolving into a hybrid: there’s a multilateral part of it, but there’s a multistakeholder part of it as well. And we’re working with the multistakeholder groups.
-
Okay, so you said Meta and Google, and that’s just from conversations you had today.
-
At Concordia, and right after Concordia.
-
This is very recent.
-
This is super recent, all of this is today.
-
Wow, so you’re very optimistic that Taiwan will be able to partner with these companies.
-
We’re quite optimistic. When we set up the testing and verification framework and assurance labs around the end of the year, as part of our cybersecurity institute, it looks like those frontier model labs will be quite willing to participate.
-
Okay. So, I mean, if you were to put it in, like, very plain English, you know, what Taiwan would do with Meta and Google, how would you describe this initiative?
-
Sure. So I would say that toward the end of the year, the Taiwan National Institute of Cyber Security is going to partner with NIST and other standards bodies to develop advanced evaluation frameworks and capabilities, including testing, verification, and so on, to test those frontier models.
-
We have received quite positive responses from the frontier model developers: they’re quite willing to be part of this arrangement, so that threat assessments, the latest red-teaming results, and things like threat intelligence can be shared. In the cybersecurity world, we have emergency response teams that share vulnerabilities and so on, and we want to apply that model to the frontier models as well.
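As a rough illustration of what CERT-style sharing might look like when applied to model evaluations, here is a hypothetical record format. The field names, values, and schema are assumptions for the sketch, not the institute’s actual reporting format.

```python
# Hypothetical sketch of a shared evaluation / threat-intelligence record,
# loosely modeled on how CERTs exchange vulnerability advisories.
# Field names and values are illustrative assumptions, not a real schema.
from dataclasses import dataclass, asdict, field
from datetime import date
import json

@dataclass
class ModelThreatReport:
    model_id: str                  # which frontier model was evaluated
    evaluator: str                 # which lab or assurance body ran the test
    reported_on: date              # date of the red-teaming exercise
    category: str                  # e.g. "election-persuasion", "voice-cloning"
    severity: str                  # e.g. "low" / "medium" / "high"
    summary: str                   # short description of the finding
    mitigations: list[str] = field(default_factory=list)

report = ModelThreatReport(
    model_id="example-frontier-model-v1",
    evaluator="example-assurance-lab",
    reported_on=date(2023, 9, 20),
    category="election-persuasion",
    severity="high",
    summary="Model produces individually tailored persuasive messages on request.",
    mitigations=["refuse persona-targeted political persuasion prompts"],
)

# Serialize for sharing between participating labs and response teams.
print(json.dumps(asdict(report), default=str, indent=2))
```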
-
Okay. All right. So back to the election disinformation piece. What are some of the biggest narratives that you’re seeing out there? What are the most striking ones?
-
Yeah. So we use the actor/behavior/content/distribution (ABCD) framework, instead of mis- or disinformation, which is on the content or distribution layer. But with generative AI this year, we’re seeing that mass manipulation or persuasion doesn’t need a piece of misinformation. It can be based entirely on real news, actual journalism outputs, just with a different nuance, a different opinion tacked onto it, and so on.
-
The traditional ways of human fact-checking don’t address these new attack vectors, because these are not published as news but rather as individualized messages, each one tailor-made to the profile of the recipient. It’s like a mailing list that sends a different email to every recipient, tailor-made to them. And none of it is distributed publicly.
-
The closest analogues are scams and phishing, spear phishing or scam calls. And these kinds of phishing-style manipulation attacks can only really be countered by raising awareness among our citizenry that such things are now possible: someone could clone my voice and talk with you on the telephone for hours, as if they were Audrey Tang, and it would be a robot. There now exists an almost zero-cost way to do that with thousands of people at once. We need to get that fact out and educate the citizenry that these kinds of things are happening.
-
Wow, so you’re talking about real news that is manipulated using deepfakes and other–
-
Yeah, because closer to the campaign, there is a period when there would be canvassing calls and things like that, right? It’s just that it used to be very expensive to have human teams do that. But now, with interactive generative AI, these tasks can be done en masse at a very low cost.
-
And you’re seeing that already being used in Taiwan?
-
We saw that being used in Taiwan for fraud, for scam calls. For example, they would call you, and you pick up and say, “Hello,” and so on for three seconds, and then your voice print is cloned. The fraudsters then use that voice print to call your friends and family and say, “I’m in dire need of money,” or things like that. That is partly why Taiwan already passed a law amendment that holds Facebook and other platforms liable if they allow this kind of deepfaked investment advice in sponsored ads, ads that connect to deepfaked images, videos, or other interactive forms.
-
And if they get notices and do not take it down, and somebody gets conned and loses a million, then according to our new law Facebook is liable for that million as well. That is why they haven’t been fined: their civic integrity team has been very cooperative since the law passed. But that is a real threat, which is why we passed the law amendment.
-
So – but you haven’t seen it in the electoral context yet?
-
We’re not very near the election season yet.
-
Sure. But you expect that it will be used?
-
Well, we expect that it will be used if we do not raise awareness and civic competence, and also use language models for real-time clarification. The social sector people have already started developing that capacity: instead of contributing to fact-checking in their spare time, they now collaboratively tune a language model that can add clarification and context in real time. Without that immune system, then yes, we expect that tactic will be used. But if we prepare that immune system, then maybe the attackers will conclude that it’s not worth the cost.
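To sketch the retrieval step such a clarification system might rely on, the example below matches an incoming message against a small pool of community-written context notes and attaches the closest one. The notes, the message, and the similarity threshold are invented for illustration; a deployed system would use a tuned language model rather than TF-IDF similarity.

```python
# Minimal sketch of the retrieval step behind real-time clarification:
# match an incoming message against community-written context notes and
# attach the closest note. TF-IDF similarity stands in for a tuned model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

context_notes = [
    "Official sources confirm the election date has not changed.",
    "The central bank has not announced any new digital currency.",
    "No travel restrictions have been issued for the outlying islands.",
]

incoming = "Breaking: the election has been postponed by two months."

vectorizer = TfidfVectorizer().fit(context_notes + [incoming])
note_vecs = vectorizer.transform(context_notes)
msg_vec = vectorizer.transform([incoming])

scores = cosine_similarity(msg_vec, note_vecs)[0]
best = scores.argmax()

if scores[best] > 0.1:  # threshold chosen only for illustration
    print("Context note:", context_notes[best])
else:
    print("No relevant context note found.")
```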
-
Okay. So on the sort of disinformation and social media side of things, in the US we’ve had a very vigorous debate about TikTok and the way in which it could be used as a vector for pro-Beijing disinformation.
-
The Taiwanese government in recent months has taken certain steps to reduce or counter the use of TikTok…
-
For four years now.
-
Oh, for four years?
-
Yeah, it’s been four years since we–
-
But there were recent actions taken, right? To restrict use of government employees.
-
That was four years ago.
-
Oh, wow.
-
Yeah, four years ago we summarily banned the use of PRC software, hardware, and internet services in our public sector. So it’s not about TikTok or Little Red Book or whatever; it’s a blanket ban on PRC-branded services, software, and hardware. What we did, as you mentioned, is just to reiterate that an internet service is also a product, according to our Cybersecurity Act.
-
Some people were interpreting the original Cybersecurity Act-related guidelines as covering just hardware, or just hardware and software. But we are saying that even if you rely on the service of an app that connects to a website or an API or anything like that, as long as it’s providing continuous service and is PRC-branded, then it is also banned from public sector use. So we clarified that, but the ban was already there.
-
The clarification was new, though?
-
The clarification was new.
-
Okay. Should Americans be worried about PRC apps and software?
-
As I mentioned, the focus this year for us is on cyber attacks by foreign actors on the actor side, and coordinated inauthentic manipulative behavior on the behavior side, so the A and B of ABCD. I worry less about the content. If you see a short clip distributed on other platforms, even though it originated from a PRC app, that’s probably fine. What we worry about is that a foreign actor can tune the algorithm so that inauthentic behavior springs into action in a very short time frame. And we classify that as a cyber attack, not as an editorial, content-level issue. That’s the main vector we’re worried about.
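As a simplified illustration of that behavior-level signal, and not of moda’s actual detection pipeline, the sketch below flags cases where several distinct accounts push near-identical text within a very short time window. The posts and thresholds are invented.

```python
# Simplified sketch of a behavior-level signal: many distinct accounts
# posting near-identical content inside a very short time window.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    ("acct_01", datetime(2023, 9, 20, 8, 0, 5), "candidate X secretly met foreign agents"),
    ("acct_02", datetime(2023, 9, 20, 8, 0, 9), "candidate X secretly met foreign agents"),
    ("acct_03", datetime(2023, 9, 20, 8, 0, 12), "candidate X secretly met foreign agents"),
    ("acct_04", datetime(2023, 9, 20, 14, 30, 0), "local weather looks great today"),
]

WINDOW = timedelta(minutes=5)
MIN_ACCOUNTS = 3

# Group posts by normalized text, then check how many distinct accounts
# posted the same text within the window (a crude coordination signal).
by_text = defaultdict(list)
for account, ts, text in posts:
    by_text[text.lower().strip()].append((account, ts))

for text, items in by_text.items():
    items.sort(key=lambda x: x[1])
    accounts = {a for a, _ in items}
    span = items[-1][1] - items[0][1]
    if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW:
        print(f"possible coordinated push ({len(accounts)} accounts in {span}): {text}")
```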
-
We’ve also noticed that in the US, CFIUS has been talking about an arrangement that specifically addresses the actor and behavior levels, the changing of algorithms and things like that, instead of the content or distribution level. We think that it agrees with our assessment.
-
Sorry, what agrees with your assessment?
-
That the main threat vector comes from foreign actors’ manipulative behavior, rather than from addictive content and distribution.
-
Okay, I see. Great. So switching gears a bit: something you talked about last year was the possibility of low-Earth-orbit satellites being used by Taiwan in the event of some sort of crisis, whether it’s of a geopolitical nature or a natural disaster.
-
Earthquakes.
-
Yeah, earthquakes. Exactly. So I guess it was very early in the process when you started talking about this, in the summer of 2022. Where do things stand now? I understand that there’s an agreement with a UK company.
-
OneWeb.
-
Okay, yeah. So maybe you can talk a little bit more about that and the status of that project.
-
Yeah, I think there’s much more visibility and urgency this year, after the subsea cables connecting Matsu Island and Taiwan proper, the main island, were cut earlier this year. There are two subsea cables, and within a week, two Paracel-flagged vessels, one fishing and one cargo, “accidentally” dropped anchor and kept moving. So both subsea cables were cut, and Matsu Island was left without subsea cables and therefore without broadband internet.
-
Of course, we quickly kicked into action along with the NCC to set up microwave stations and satellite capacity and so on. But it did put the issue on everybody’s mind, just as the cyber attacks during Nancy Pelosi’s visit last August put cyber attacks and DDoS into everybody’s mind.
-
This year, the Matsu incident turned the cutting of subsea cables from a hypothetical situation into an actual one. And so we’ve doubled down on investment in non-geostationary satellite systems. We now have capacity with SES in medium Earth orbit and, as I mentioned, with OneWeb in low Earth orbit. Both are being tested by the TTC, the Telecom Technology Center, as we speak.
-
The hope is that we work with as many satellite vendors as possible, so that by the end of next year we’ll have more than 700 mobile or fixed satellite receiving points, and each point, whether a hotspot or a backhaul, hopefully connects to two or more satellite systems. That way it’s less likely that all of them will be disrupted or jammed or broken during an earthquake.
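To make the redundancy reasoning concrete, here is a back-of-the-envelope sketch under the simplifying assumption that each satellite link at a site fails independently with the same probability. The failure probability used is an illustrative assumption, not measured availability data.

```python
# Back-of-the-envelope redundancy arithmetic, assuming each satellite link
# at a site fails independently with probability p during an event.
def site_outage_probability(p_link_failure: float, n_links: int) -> float:
    """Probability that every link at a site is down at once."""
    return p_link_failure ** n_links

p = 0.10  # assumed chance a single satellite link is unavailable
for n in (1, 2, 3):
    print(f"{n} link(s): site outage probability = {site_outage_probability(p, n):.4f}")
# 1 link : 0.1000
# 2 links: 0.0100
# 3 links: 0.0010
```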
-
So, I mean, one of the major satellite providers that people have talked about in the context of conflicts or other crises these days is Starlink, run by Elon Musk. The Taiwanese foreign ministry criticized comments by Musk that seemed to repeat a Chinese talking point about Taiwan being an integral part of the PRC.
-
What is your assessment of a potential collaboration with Starlink? Has moda ruled it out? Have you spoken to Starlink, and where do you stand on that?
-
I think there are arrangements where having an additional satellite provider, as a supplement rather than a replacement, can be helpful. And as long as basic cybersecurity standards are met, having highly available, redundant capacity is by definition always a good thing. So, to your question, we’re not ruling anything out. On the other hand, we do see that being overly reliant on one satellite provider, in particular the one that you mentioned, may not be the preference of many Taiwanese people and MPs.
-
So we’re investing in a plurality of satellite providers, not just for redundancy’s sake, but also because we want to work with many jurisdictions, many countries’ systems, so that it becomes, as I mentioned, very difficult to jam or disrupt all those different satellite systems belonging to different countries at once.
-
It is also for the same reason that we’re working with all three major cloud providers, that is to say Google, Microsoft, and Amazon: not just backing up data outside of Taiwan, but also working on local resilience, on setting up their local data centers, at least for the critical video communication lines. So even when the subsea cables are cut and we have to rely on satellites, and even if the satellites are jammed, at least domestic-to-domestic conversations can still happen using the domestic data centers.
-
So the goal is 700?
-
Fixed or mobile satellite receiver sites.
-
Okay, so where are you guys now?
-
Yeah, we’re quite early into the 700. The thing about the 700 is that it refers to sites, not vendors, so each site can connect to one, two, three, or more satellite vendors, right? I think we’re starting with the remote islands first, because remote islands have the least capacity for network redundancy. Some of them already rely only on geostationary satellites, and of course those are the priority. Some of them have some microwave capability, but that’s actually not hard to disrupt in a dedicated “earthquake.” So we’re doing that as well.
-
In addition to those remote islands this year, I think OneWeb will cover most of Taiwan toward the end of the year, and hopefully all of Taiwan by the end of the year. And so that’s when we will start rolling out more testing sites for real.
-
Really? Wow. So it’ll cover all of Taiwan, probably?
-
Yes, that’s the message we hear from OneWeb.
-
Okay, that’s pretty significant progress. Great. And then I’m just curious whether there have been any new disinformation narratives over the past week, with the massive flight of PLA jets into Taiwan’s air defense identification zone. Has that come with any new –
-
Manipulation attacks?
-
Yeah.
-
So ADIZ is really the Ministry of Defense, right?
-
Of course, yeah.
-
And they have their Twitter account, so I encourage you to ask them that question.
-
But to your question, I think we’re mostly monitoring foreign interference that’s coupled with cyberattacks. Before last August, these two, cyberattacks and information manipulation attacks, were quite distinct, but since last August they have become quite closely coordinated.
-
As I mentioned, because of generative AI, the costs of both are dropping, which makes further coordination even more likely. I don’t think there’s this kind of coordination going on in this particular incident, but we’re monitoring quite closely.
-
Since they used Pelosi’s visit as a pretext?
-
There was also a significant amount of that, of cyberattacks, especially denial of service, this March when Dr. Tsai Ing-wen visited the U.S. So there were two spikes. But so far this month, we haven’t witnessed anything of that magnitude yet.
-
Yeah. Okay. And do you think that there will be a big spike leading up to the election?
-
Well, we’re always preparing, right? Wellington Koo of the National Security Council was quoted in the media saying that, on average, there are 5 million cyber attack attempts per day. So when I say spikes, it doesn’t mean that normally there’s nothing; it’s millions per day. That’s just the background level, right? So we’re always prepared.
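As a minimal illustration of what distinguishing a spike from that background might involve, the sketch below compares one day’s count against a simple baseline from the preceding days. The daily figures are invented, and the three-sigma threshold is an arbitrary choice for the example.

```python
# Minimal sketch of spike detection over a noisy baseline of daily
# attack-attempt counts. The counts below are made-up illustrations.
import statistics

daily_attempts = [5.1e6, 4.8e6, 5.3e6, 5.0e6, 4.9e6, 5.2e6, 14.7e6]  # last day spikes

baseline = daily_attempts[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

today = daily_attempts[-1]
threshold = mean + 3 * stdev  # flag anything far above the normal band

if today > threshold:
    print(f"spike: {today:.2e} attempts vs baseline {mean:.2e} (+/- {stdev:.2e})")
else:
    print("within normal background level")
```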
-
Yeah, it’ll keep you busy. Great. All right. Yeah, I’m just trying to think of anything else that we need to cover here. I mean, I think we’ve covered a lot of things.
-
Do you have a broader message to an American audience about what Taiwan is doing to fight disinformation and deal with the disruptive effects of AI? What are you telling people and leaders on the sidelines of the UN this week? What’s the big takeaway for them?
-
Yeah, I think one of the main messages is that Taiwan can contribute to a no-compromise solution to the threat that AI poses to democracy. It’s often phrased as a compromise between progress and safety. If you want more innovation, you give up some public safety. If you want safety, you slow down progress. It’s just like during the pandemic times, right? If you care about public health, you disrupt the economy. If you care about economy, there’s some health cost, right? It’s like a dial.
-
The Taiwan model, if you call it that, is that through co-creation and participation with civil society, with the people closest to the pain, we can find solutions that don’t make sacrifices, that take care of both progress and safety through participation. And none of the methods I just described to you actually requires any rollback of the fundamental freedom of expression or the fundamental freedom of association or anything like that.
-
So we’re not buying into the narrative that only top-down lockdowns, shutdowns, and takedowns are going to save online digital life from the threat of information manipulation and generative AI. We’re basically saying we can co-create real-time responses that safeguard and even advance the bandwidth of democracy.
-
Yeah, I think that’s it for me. We’ve covered a lot of ground here. I really appreciate your taking the time, Minister, during a very busy week.
-
Sure. Thank you.
-
Thanks.