• Hello. Can you hear me?

  • Hello. Yes, Minister, thank you so much for taking your time. I can hear you very well.

  • That’s great, that’s great. Let’s get started.

  • Before we start, is it fine if I record this? That way I can focus on speaking and listening.

  • I don’t have to take any notes.

  • Just in case the voice is not coming through very well, I’m recording on my side also. At the end of the interview, you can choose whether we publish this as a transcript after 10 days of editing, or maybe we just upload it to YouTube after a date that you’d specify.

  • Actually, my plan would be to use it for a short article, so I’d just use excerpts of it. For me, it’s easier to have a recording, because then I don’t need to take any [laughs] notes.

  • I would love to talk about two things, actually. The first part would be more generally about data, data protection, and also the upcoming GDPR here in Europe. Then in the second part, I would like to speak about AI, which is what I mostly cover.

  • My first question would be: next week, on the 25th, the GDPR will finally take effect. Do you believe that Europe is taking the right step with this updated set of data protection laws?

  • Definitely. Taiwan has our own Privacy Act, which very closely resembles the previous version of the European privacy law. We have also added parts that we feel are very important, such as the user’s right to be empowered to interrogate the data operators and so on. Those parts are not just in the GDPR; they are further enhanced there.

  • We think it’s totally in the right direction. The National Development Council here in Taiwan has already allocated a whole website section just for GDPR compliance. We’ve worked with all the relevant ministries to publish guidelines for GDPR compliance. We’re totally supporting it.

  • When it comes to your own law, I believe it’s called PIPA, right? The acronym of your own data protection act, originally from 2010, if I’m not mistaken.

  • That’s exactly right, yes.

  • Data protection is something that’s in flux, and these rules need to be constantly updated and so on. Would you say that in future revisions of your own act, you might draw inspiration from the European approach, because it is very comprehensive?

  • Yeah, of course. There are, of course, exceptions to the privacy guarantees, and different jurisdictions emphasize different things. All of us encourage academic use. For some European countries, there are exceptions made for historical research, for the archivists and the historiographers.

  • We don’t make an exception for the history people here. Instead, we make exceptions for, for example, criminal statisticians and criminal investigation. What we’re saying is that there are different social norms.

  • There are parts of it, such as data portability and the other more technical aspects, that we’re already installing at the regulation level, not necessarily at the law level. The GDPR is a great opportunity for us to take parts of what we already do at the regulation level and, in our next law revision, put them into the law.

  • In addition, take the Data Protection Authority. At the moment, each ministry in Taiwan is the DPA for all the commercial entities registered under that ministry. That’s usually not a problem. But with the platform economy and more such companies, we are seeing that one operator may fall under the jurisdiction of multiple agencies or ministries. Some harmonization of that is great.

  • We’re seeing that in Japan also. They used to have each ministry acting as DPA. Now, they also have a central agency in charge of harmonizing the different interpretations of data protection laws within all the different ministries.

  • The National Development Council is now also taking charge of that. We’re reshaping the Department for Information Management into, potentially, the Department for Digital Development, so that it can provide more oversight of all the different ministries. We see it as a positive opportunity.

  • There’s talk here in Europe about the GDPR potentially becoming a global model for data protection. What’s your reaction to this? How do you feel? Could it be a role model?

  • There are parts of the GDPR that, I think, are very advanced and that we should definitely learn from. In particular, the requirement for the data operator to explain in understandable terms, instead of just a requirement to explain at all in whatever technical terms, is a real innovation, and that’s the one that I personally feel is very important. I could call that a model for the world. [laughs]

  • There are other parts that we will have to adjust based on the social norms here. For example, in Taiwan, for the special, sensitive personal data, we actually have stricter protection than the GDPR. Medical records, health records, genetic information, and also criminal records and things like that, we are actually putting under a much stricter provision.

  • We’re not looking at the GDPR to say, "so we can relax those." [laughs] There are parts of it that we need to harmonize with our practice. Generally, I would say it’s in the right direction.

  • I’m slowly shifting now towards AI, [laughs] as I said earlier. Of course, data and AI are closely related. AI doesn’t work without data. Last month, the European Union released its own strategy on AI. Summarized in a nutshell, the EU’s idea is to become a leader when it comes to the ethics of AI and to preserve fundamental rights alongside the rise of AI.

  • The idea is that this will make the continent competitive in the race where, right now, we have the US leading but China catching up very quickly. First question is, what do you think about this approach?

  • Any public discussion is a good thing, because the scenario that we don’t want to see is that AI researchers stop publishing and start working as cabals and conspiracies. [laughs] That would be to the detriment of everyone.

  • We want to encourage our researchers to work in the open and to work out AI safety and ethics norms with the whole society, with all the stakeholders. In fact, just this week, we’re proposing new legislation to our parliament. It’s called the AI Mobility Sandbox.

  • I see that Germany is setting a kind of AI ethics for autonomous vehicles that puts humans first, animals second, [laughs] and has some very interesting ideas about nondiscrimination by race, ethnicity, and things like that when they consider human lives and so on, which are very good guidelines.

  • In reality, what people care about is not only such top-down philosophical guidelines, but very practical things, like when an AI-driven vehicle runs into something, not necessarily people, when it runs into a building, for example, how do we interrogate that vehicle and see the world from its perspective, so that it can communicate with people?

  • This process, we already have a word for it: it’s called domestication. Just like the wolves and earlier hominids co-domesticated each other to become modern dogs and modern humans, [laughs] we also need a way for the early AI vehicles not just to be subject to some top-down ethics standard, which is important, I’m sure, but also to have their integration into the society interrogated.

  • A case in point: the MIT Media Lab has this project called Persuasive Electric Vehicles, or PEVs. They’re autonomous vehicles, but very slow-driving tricycles, and they can still carry cargo and people. We can have them because they use the right of way on roads just as pedestrians do.

  • We have them running around Taipei in the Social Innovation Lab. We recorded a lot of interactions of these vehicles with people. Because it’s slow enough, if it runs into people, it doesn’t really hurt anyone. We were able to gather all of that because it’s open source and all the data is shared.

  • We were able to have the local university students tweak it so that we could try various different ways for it to signal its intentions, and for the humans to signal their intentions, and maybe merge the worldviews so that we can view a playback of an incident from the vehicle’s viewpoint and so on. We were able to do that because Taiwan is a place that values experiments.

  • In the AI Mobility Sandbox, what we’re doing is that we’ll have the local, regional governments declare the social needs that could be fulfilled by an element of testing of AI vehicles. It’s not just driving; it could be ships, it could be drones, but slowly, maybe under a speed limit or something, to experiment with the business model.

  • The important thing is not some top-down rules, but for the society, through this experimentation, to gain a firsthand understanding of how to co-domesticate with AIs, and then to write up such multi-stakeholder opinions and reflections into something that could in turn inform the interaction design of the vehicles, so that they can explain themselves and integrate better.

  • What I’m trying to say is that with the AI Mobility Sandbox, we’re taking a grassroots approach, instead of a few legislators, a few theoreticians, and a few computer science ministers, that’s me, [laughs] declaring that such and such a thing is good and ethical from an AI standpoint. We’re going to use a slow-speed, limited-area sandbox, and let the society work it out with the individual vendors.

  • At the end of the experiment, if it’s declared good for society, we’ll just incorporate part of it into the regulation. If it’s not a good idea, at least it doesn’t really hurt anyone, and we can demand extra restrictions on future experimentation. We already have some success with the fintech sandbox, with AI banking. Now we think AI Mobility should be the next sandbox after the AI banking one.

  • I’ll get back to this in one second. I wanted to ask one other question. If you look at the global landscape when it comes to artificial intelligence at the moment, you have the US, which is still leading, and the US traditionally follows a very business-centered approach, where the expertise is with the big tech companies. That’s where it happens.

  • We have China, which wants to catch up very quickly and provides companies with data. It’s also what I would describe as a surveillance state. Europe wants to come up with this third path. They say, "We need to be a place where people know that their data is being used safely, whether locally or abroad." In this tableau of three different, broader approaches...where do you see Taiwan?

  • It’s an oversimplification, because I just returned from the Valley and I talked with the OpenAI folks. OpenAI, as you know, is a charity. Its explicit goal is to work out safety laws for generalized artificial intelligence.

  • I would not say that they’re profit-driven at all; they are all very interesting AI researchers trying out all the different branches, trying to reach generalized artificial intelligence before actors with malicious intent do. They have a charter. There are parts of their charter that could still use some more conversation and deeper exploration, because regulatory co-creation, I think, is very important.

  • Their OpenAI charter strikes a pretty good balance between what you described as human rights interests and the private sector interests. I don’t think they’re necessarily competing with each other, the US and the European approaches. I will not comment on the compatibility between the surveillance approach and the other approaches. [laughs]

  • In Taiwan, my domain is not just digital; it’s also open government, social innovation, and especially social entrepreneurship. What I always try to encourage in the constituents, and also in the civil society, is to not think about human rights, environmental causes, or any other social justice cause as opposed to profit, to business, or to commercial interests.

  • With the right design of social entrepreneurship, you can use the for-profit motive for social good, as with the B Corp movement and other movements. I think AI only takes off if there are incentives for all the stakeholders to not just share their data, but also publish whatever they learn, because, frankly speaking, there is no generalized theory at the moment guiding the field of AI.

  • It’s just, I would say, a random walk [laughs] across all the different applications, trying to solve practical issues. What is learned doesn’t necessarily apply only to the field being experimented in; sometimes just playing Go or playing Atari games can carry over [laughs] to some other field.

  • What I’m trying to say is that we have to carefully align the social benefits and the private sector’s for-profit motives, which is why Taiwan’s AI plan, which is at ai.taiwan.gov.tw, strikes a balance by saying, "We are going to have the small and medium enterprises find out which parts of their work can be automated."

  • That’s obviously a commercial motive, but then the academia and the people working on research try, as part of solving this problem, to also find ways for social innovation to bring the benefit to everybody through regulatory co-creation.

  • With the industry proposing solutions, and academia and the civil society refining the solutions to be acceptable to the general public, we have to strike a balance between the private and the civil society interests. That is actually what most of the large companies that I have spoken with, like Microsoft or Google, are doing anyway.

  • Partly because of the GDPR, but also because of a collective awareness of the potential damage that AI can do to human society, you will see that once they roll out an AI product, they very quickly rush to say, "Oh, by the way, Google Duplex will declare itself as a bot, and it will refine its interaction so that it can integrate with human society without exception," and things like that.

  • We didn’t usually see such prefaces in previous product announcements. That, to me, is the signal that they’re also taking this balanced approach.

  • Speaking of this, you, as in Taiwan, the country, managed to attract a couple of American companies to come to Taiwan and open up their own AI divisions there.

  • I understand a lot of the expertise is there. Here in Europe, we have a similar phenomenon, with them opening their divisions here and there.

  • Politicians and lawmakers here are concerned about a brain drain on our own territory: that talent is going to the US companies, so that a lot of the expertise still remains with them. Is that something that you’re concerned about?

  • I see AI mostly just like the invention of fire. [laughs] The more democratized it is, the safer it is. It’s true. If just a handful of people in a society can use it and everybody else treats it as a black box, then we run the risk of a lot of social catastrophes because of people’s misuse. It’s just like fire. It’s dangerous. It has burned entire cities.

  • We teach how to use fire in a safe and responsible way from when people are four or five years old. It’s part of the cooking class. What I’m trying to say is that in our K-12 curriculum, we’re explicitly saying that AI, access to ICT, media literacy, and critical thinking are not just some two-hour or four-hour class that all the students must go through.

  • They are actually to be ingrained into all the different fields, so that the students use AI as just another tool to simplify their lives while thinking very critically about biases and other things as they learn all the different disciplines. It’s not just for the computer science discipline. We have integrated that into the curriculum starting next year.

  • What I’m trying to say is that if there are many AI researchers doing cutting edge research in Taiwan, it will increase the public discourse on AI because we will have thousands of people who are knowledgeable enough about this, who can participate in our democratic process.

  • Once the K-12 people and other children in Taiwan, because broadband is a human right here, have easy access to GPU computing or other AI computing clusters, this is actually what causes the reverse of the brain drain. [laughs] It makes everybody AI-aware.

  • In a few years, we will not think about AI as some very special thing; it will just be part of the automation. Just like office automation: it used to be treated as something magical, but now it’s just part of life.

  • That’s very interesting also, the analogy with the fire. I have two more questions. One would be looking at the US. The White House held an AI summit last week.

  • From what I heard from my US colleagues, the Trump Administration signaled to the companies that they won’t regulate massively at this point in time, because they say that for AI to foster growth, there should be little regulation at this point. What do you think? Is that the right approach? Should there be regulation, and how much regulation should there be for AI at this time?

  • As I said, the motto is co-regulation, or regulatory co-creation.

  • If a company comes to us saying, "AI banking is currently outlawed by the fintech laws of the financial ministry," instead of saying we’re doing a light touch or a heavy touch, we instead say, "OK, write up exactly where our regulations have hampered your growth," and have a multi-stakeholder panel look into it.

  • There are some red lines, like funding terrorists or money laundering; you can’t do an experiment on those. [laughs] Other than those things, you can do an experiment to challenge the existing laws and regulations without the regulators and the lawmakers having to commit one way or the other.

  • We can, through six months of experimentation, have everybody affected by this new AI banking service, or very soon the AI Mobility service, determine whether it’s a good idea or not. It’s also part of what we just called the media literacy or AI literacy idea, because if it’s co-regulated with the civil society, everybody learns a little bit about how the machine views the world.

  • If it is just a handful of regulators, then everybody ends up none the wiser. It is easy to say that we need to uphold some standards on freedom of expression, assembly, and freedom from surveillance, from coercion, and from censorship.

  • Other than those basic freedoms, all the norms of interacting with AI cannot be done with a broad brush. They have to be very specific to specific area implementations, specific even to a county, and to how the people there want to react. Maybe the county nearby doesn’t want to react the same way, which is great.

  • There is a lot of diversity in how to incorporate even domestic animals [laughs] into human populations, and we should use a very similar analogy when incorporating AI into the everyday life of people, especially as they’re upgrading from an assisting role to an autonomous role.

  • My last question would be looking back at Taiwan and Europe, very broadly speaking, for AI, where do you see the potential for Taiwan and Europe to cooperate?

  • In Taiwan, there’s a lot of interest in research into AI safety and what we call trustable AI, explainable AI, interpretable AI. Not just the GDPR, but also the recent declarations provide a model.

  • For example, if Germany has passed a certain law that translates into algorithms for the automakers doing self-driving cars in Germany, then with our AI Mobility Sandbox, we don’t have to start from scratch. We can incorporate that same algorithmic oversight and accountability into our co-creation system and start from where Germany has started.

  • We’re not seeing any competition between those norms, because essentially, this is codifying our social expectations not just into laws anymore, but into code. Code has the property that it transcends jurisdictions. You can take the same code and compile it into different languages and different regulations. Even that part is being taken care of by AI. [laughs]

  • In the end, we will have a set of abstract code, algorithms, and parameters. Our regulatory co-creation will be the society’s tuning of those parameters and hyperparameters, but the end result will be shareable among all the different jurisdictions.

  • For example, just to take another non-AI example: just recently, the Ministry of Transportation and Communications here has regulated that shared driving, carpooling, is limited to two trips a day, the commute there and the commute back.

  • If you charge people for those two trips, you’re still carpooling. If you’re doing it more than two times a day, then from the third trip onward, you’re essentially running an Uber-like rental car service, and you start being subject to taxation and so on from the third trip onward.

  • I can easily imagine that in other jurisdictions in Europe, using the European platform economy laws, it’s not two trips but four trips, or whatever.

  • I think the structure of the argument will be the same. We will be able to co-create on the code-based norms, as sketched below. Each society can opt in or opt out, and tune the parameters, like two trips or four trips. We’re going to see very much the same thing with AI banking, AI Mobility, and other applications.
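
  A minimal sketch of such a code-based norm, in Python: the rule structure stays the same across jurisdictions, while each society tunes its own parameters. The names (CarpoolNorm, free_trips_per_day) and the two- versus four-trip thresholds are hypothetical, chosen only to mirror the carpooling example above, not an actual regulatory system.

```python
# A "code-based norm": one shared rule structure, per-society parameters.
# All names and thresholds here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class CarpoolNorm:
    free_trips_per_day: int  # tunable parameter: 2 in the Taiwan example, maybe 4 elsewhere

    def is_rental_service(self, trip_number: int) -> bool:
        """Trips beyond the daily allowance count as an Uber-like rental service."""
        return trip_number > self.free_trips_per_day


taiwan = CarpoolNorm(free_trips_per_day=2)              # commute there and back
other_jurisdiction = CarpoolNorm(free_trips_per_day=4)  # a society tuning differently

print(taiwan.is_rental_service(3))              # True: taxable from the third trip onward
print(other_jurisdiction.is_rental_service(3))  # False: still within that society's allowance
```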

  • Minister, thank you so much. This was really, really helpful. Thank you also. I know it’s late in Taiwan already, so thank you for... [laughs]

  • No. It’s just great.

  • Actually, I want to publish it as fast as possible. It’s not entirely in my hands but my editor’s, in Brussels. I will keep you updated. I will send you an email as soon as I know...I’m sure it’s going to be out this week. This is really my...

  • After it’s published, would you mind if we just publish this as a YouTube video?

  • No, for sure, absolutely.

  • I’ll post it as an unlisted video, so it’s not searchable by anyone else. I’ll paste you the link and you can review it. Once the article is published, just let me know and I will flip it to public.

  • That sounds great. That sounds really good. Thank you very much for your time. You go have a great day.

  • Thank you for the great questions. Thank you.