• Thank you so much for taking the time to do this with me and for being so kind to offer your time and support for the work we’re doing. So yeah, really grateful for that.

  • How much time do you have?

  • So basically, I was hoping to use this meeting to ask you a bunch of questions, just as follow-ups on the document that I shared earlier and also just other questions that I had.

  • But do you have a hard stop after this? When should we wrap up? I don’t have anything, so it’s mostly your time.

  • Okay, yeah. I guess we can keep it free-flowing if that’s okay. I know we have about an hour blocked, but we might finish sooner or run later depending on how it goes, if that’s okay.

  • Later is fine. Because your follow-up questions are more qualitative in nature, I would prefer if we just, as you said, take it easy and go through them more carefully.

  • Yeah, certainly, certainly. I have no problem with that. I have time on hand and no hard stops. I guess maybe I can quickly begin by introducing myself and telling you a little bit about our project and what we’re hoping to achieve, just to kind of set context.

  • So, I’m a researcher working on OpenAI’s geopolitics team, which is housed under the broader policy research team. The policy research team essentially looks at understanding the current and potential impact of OpenAI on the world and at recommending the best possible policies for dealing with advanced AI capabilities. Our work is both internal facing, informing OpenAI, and external facing, meant to inform all of the important actors and entities in the space.

  • And one of the projects that I’m working on currently, as I mentioned in my email and in the documents, looks to study AI risk perceptions across a set of countries. Basically, it aims to study the convergences and divergences in understanding the nature, degree, and urgency of AI risks amongst various state actors. For the scope of the study, we are mostly focusing on seven countries, namely the United States, the UK, Japan, South Korea, Taiwan, India, and the Netherlands. As you can see, all of these countries have a current or potentially important role to play in the semiconductor supply chain, and that’s just the scoping mechanism that we used.

  • Our questions are, broadly speaking, about AI risk overall. And by providing clarity on how various actors perceive or prioritize AI risks, we hope that this project’s work can aid efforts at international cooperation on responsible AI development. In terms of methodology, we were initially looking to study AI risks across countries primarily by studying how these countries have discussed AI risk in their national strategy documents, regulatory frameworks, ethical guidelines, and other policy documents that have been published.

  • For example, for Taiwan, we looked at the AI action plan that was released in 2019 by the Department of Information Services in the Executive Yuan, and also the Taiwan AI Readiness Assessment Report, which was published by the National Science and Technology Council. Unfortunately, we realized that there are a lot of limitations to only looking at strategy documents.

  • Firstly, not all countries have engaged with this topic of AI risk in the same way, such that the extent of the discussions, let alone the content itself, really varies across country documents. We realized that the answers we’d get through this might require us to make several assumptions. So, we decided to add another component, namely identifying the questions we’d like to get answers to, asking officials across these countries the same questions, and seeing where narratives stand on a spectrum. And you’ve already answered those questions. I’m really grateful to you for sending in your responses.

  • I guess maybe I can just begin by asking you, do you think that the government policy documents in Taiwan have engaged in this discussion of AI risk?

  • My general sense is that it hasn’t gone into great detail, but it certainly has acknowledged risks such as bias in algorithms and lack of transparency, for example, in the AI Readiness Report. And it has mostly linked discussions on regulating AI to existing laws, such as the Personal Data Protection Act and the Cybersecurity Management Act. Please correct me if I’m understanding this incorrectly. I’m still trying to understand the space. It’s very new to me.

  • I guess to summarize my question, do you think mitigating AI risks is a big priority for the Taiwanese government? If yes, what kind of risks do they see as high priority for them? If not, what aspects of AI are they focused on making progress in?

  • Yeah, great question. I want to ask a couple of clarifying questions first. So, if I give you, for example, an hour-long video in Mandarin, you certainly have the capacity to just infuse it into your research, right, through Whisper or automatic translation, transcription, and things like that?
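
  (As an illustrative aside, a minimal sketch of the transcription-and-translation step being asked about here, assuming the open-source `whisper` Python package; the model size and file name are placeholders, not anything from this conversation.)

```python
# Hedged sketch: transcribe a Mandarin video and translate it to English
# with the open-source Whisper package. The file name is a placeholder.
import whisper

model = whisper.load_model("medium")  # larger checkpoints handle Mandarin better

# task="translate" asks Whisper to emit English text directly
# from the Mandarin audio track of the video file.
result = model.transcribe("summit_keynote.mp4", task="translate")

for segment in result["segments"]:
    print(f'[{segment["start"]:7.1f}s] {segment["text"]}')
```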

  • Okay, you do. Okay. And the other thing is, by officials, do you mean people in the administration, or also municipal, judicial, or legislative officials?

  • Yeah, I guess mostly the idea is that we’re looking at officials that are involved in the policymaking process in some sense, or in the policy-informing or policy-influencing process. Generally speaking, this ends up being mostly individuals working in government across these countries.

  • But at the same time, I should say that on a case-by-case basis, we’re also speaking with individuals from civil society and academia that have a credible influence on policymaking in that country.

  • But yeah, generally speaking, I should say the majority are government officials. And this ends up being more or less federal government agencies across countries, but it also really differs because every country has its own way of organizing its strategy discussions.

  • Indeed, like in India or in Germany, there’s a lot of delegation, right, to the local governments. Taiwan, although we’re 23 million people, is relatively flat, so to speak. So, it was more of a curiosity than anything else.

  • So, I’m going to paste you two video recordings, both from the AI Academy Summit, that explicitly talk about risks. It’s notable because the first video that I pasted you, which timeline-wise occurred right after the second video, is from the administration, from the AI Center of Excellence in the Science Minister’s Office, and the main budget allocation part, right, the Board of Science and Technology in Taiwan. And that’s how we convene. We actually convene in a civil society-led summit, but this entire video is Creative Commons, and it explicitly talks about risk mitigation, management, risk profiles, and so on. And it’s right after a keynote by Professor Lawrence Lessig, who worked closely with Tristan Harris and friends that we both know. So, it contains a particular framing of that risk profile, let’s just call it that. And at the beginning of the first video, I did a demo of people from Taiwan using Polis, expressing their thoughts about the risks and benefits of AI. And this is open source, by Talk to the City. And so, I demoed how you can actually ask people directly, or ask a GPT-4-simulated synthesis of people.

  • Anyway, for the questions that you just asked me, the technology, since it’s open source, is being used for many things, like the Twitter discourse analysis from around the end of March, which I also talked about, as well as for deeper dives into a particular population, like Heal Michigan, and so on. So, my point being that it is actually possible for you to ask the same questions you asked me to these in silico clusters, which I understand is also something OpenAI funds through its Democratic Inputs project.
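
  (For illustration, a hypothetical sketch of the “in silico clusters” idea described above: feeding one opinion cluster’s Polis-style statements to a GPT-4 model and asking it to answer a survey question as that cluster might. The statements, question, and client setup are placeholder assumptions, not material from this conversation.)

```python
# Hedged sketch: simulate one deliberation cluster's answer to a survey question.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

cluster_statements = [
    "AI tools helped our small business reach customers overseas.",
    "We worry that synthetic media will pollute the public forum before the election.",
    "The government should fund local-language models for indigenous communities.",
]

question = "Do you think mitigating AI risks should be a top government priority?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You simulate the perspective of one opinion cluster from a public "
                "deliberation. Answer only in ways consistent with these statements:\n- "
                + "\n- ".join(cluster_statements)
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```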

  • So, with that said, yes, to your question more directly: we think the societal risks are difficult, if not impossible, for the government to measure, which is why you don’t see a lot of very detailed measurement.

  • Like, if you look at the Taiwan climate risk adaptation plan, there are chapters upon chapters of climate sensing projects that we’re investing in, and we’ve been doing that for quite a while. But we also realized earlier, around 2017, through our civil IoT project, that a lot of that sensing also has to come from civil society, including primary school students and their teachers, including environmental activists.

  • This is because, first, the government’s view of climate, pollution, environment, and so on is very different from that of the people on the front lines who are actually suffering the consequences. So, if the risk is particularly high for one specific population that didn’t have preparedness, it may not look very serious to the government. But for these particular people, it is life and death. And for that sort of risk profile, we need civil society inputs that are escalated as quickly as possible.

  • This is very similar to the concept of vulnerability reports or post-product-lifecycle incident reports in the cybersecurity world. So, we set up, essentially, a mechanism for civil society to infuse their risk sensing, first around air pollution, but then also around water pollution and many other pollutions, into the civil IoT sensing framework that’s part of the environment and climate framework. During the pandemic, we did the same: anyone can call 1922 and report new epidemic risk information. And so societal risk evaluation, or social eval, has been in Taiwan’s DNA for six years or so.

  • So, at the end of the year, my institute, the Cybersecurity Institute, will set up the AI Evaluation and Certification Center, the AIECC. And beginning next year, there will be a strong social eval component. And I’m happy to see that you’re also having this preparedness challenge, which goes beyond open red teaming, because open red teaming mostly talks about the model, the deployment, and API design, the surface area, right? It’s more traditional cybersecurity, whereas preparedness now directly talks about, say, a cult using it to synthesize novel bio attack resources, but combining it with resources that only this cult has, and so on. So, it goes beyond product assessment and is much closer to the societal evaluation that we have in mind.

  • So, to your question, we have invested in that mechanism. We have done AI-assisted deliberation on the risk profile, and there are around four more planned toward the end of the year. And undoubtedly, we will continue to work with international partners, including top labs, through alignment assemblies and so on, to make this a continuous evaluation. So that’s where, for those very narrow risk surfaces (ones due to lack of preparedness, not due to agentic superintelligence), we can respond within 24 hours.

  • Yeah, it’s great to know that. I’m excited to explore all the resources that you’ve shared. That sounds great. I guess one of my questions relates to some of the responses that you provided to the questions I had shared in the PDF.

  • So basically, there’s this one worry we’ve especially seen over the past few months: some individuals have been thinking that AI development is happening too fast, and have proposed putting a pause in place in order to deal with it. Some others think pauses don’t help and may not be as useful in enabling us to develop safer technologies.

  • There is a spectrum of arguments here. I was just curious, what do you think about this? What would you prefer for the pace of AI development? Should it speed up? Should it stay the same? Should it slow down, temporarily pause, or stop? I was hoping to get more context on how you think about this.

  • Yeah, in fact, the link, the /twitter link that I pasted you, visualizes that spectrum for you. It is from around the end of March, when I was visiting the US on a personal journey to do deep canvassing, really, right. So, armed with that spectrum, I asked, at the time, GPT-3.5 versus GPT-4, you know, what are the bridging narratives that bring people closer, despite their apparent shouting at each other on Twitter, right?

  • And I actually did visit OpenAI in 2017 or 2018, around that time. And the people I visited became Anthropic. But anyway, I did visit. So, I think the thing that brings people closer is this common realization that for every one researcher working on safety, alignment, and care, there are 30 researchers working on capability and power. And that is not normal for technology of this sort. And the paradox was that many of the people who actually work on capability and power started as safety researchers. And they say that we cannot study this threat unless we build that threat. But then there was no corresponding investment at the time in what we would nowadays call preparedness or, at Anthropic, buffering.

  • There are many different words for the same idea. But at the time, there wasn’t even, you know, the commitment of 20% of compute to superalignment, right? So, none of this was public. I mean, it was whispered about. So, we heard those whispers, but none of this was written up. And so, it’s very understandable from the outside, if I’m a policymaker without deep connections to this community. I worked with Siri for six years and so on, so I know these people and I’ve read what they’ve written, right? But for a normal policymaker, not a weird one like me, it’s completely understandable that they would want to pause this, because they have not seen a corresponding goodwill to increase the investment in safety and care by 30 times, so that there’s an equal number of researchers and a promising career path if you’re a bright student, freshly out of somewhere, who wants to work on safety. That didn’t exist back in March.

  • So, I think a lot of those conversations, canvassing, and diplomacy eventually culminated in the safe.ai statement, which is that we need to take societal risks seriously, like proliferation or pandemics, not as acute as climate, mind you, but equal to the other two. But the thing with that statement was that it’s an umbrella. It did bring people in, and it would eventually bring many more people in, but it does not chart anything specific to do. It’s a little bit like the 17 SDGs. We know it’s very good to get there by 2030, but we don’t know how to get there. Which is why venues like the Bletchley Park Summit are important, because they give the leaders of the world not just the goals, or the current imbalance and the goodwill to bring it into balance, but rather a charted path of work: risk sensing, mitigation, and things like that.

  • So, I guess it’s a long-winded way to answer your question, in that I think the rush to pause or stop was not because of ignorance. It was because of an anger that highlights a gap between the lack of investment in safety and the repeatedly professed commitment to safety. So, as long as we can close that gap, I don’t think this is something inherent to the people who propose a pause or a stop.

  • Got it. And I’m guessing, in that sense, in the current scenario, keeping the speed of AI development the same, while at the same time working toward more progress on the safety questions, is what you think would be the right way.

  • I think the goodwill is just to match the two investments. And as Anthropic said, you know, so that we always remain at one sixth, because we understand that maybe the cults we don’t know about may have surprising resources, right? So, the margin should be larger. Now, whether it’s six times or more, that’s up for debate. But I think this mentality is very good. So, in a sense, you almost have to invest more resources in safety than in capability. And then you can call this a race to safety.

  • So, you’re still racing, you’re still keeping up the speed and maybe increasing it, but the direction changes. The direction is no longer about being the first to develop superintelligence, but rather the first to align intelligence in general, and superintelligence.

  • Yeah. That’s actually really helpful and a very interesting perspective. I think one thing we noticed across the various country documents discussing AI risk is that very few actually officially discuss the long-term existential risks. Actually, none except the UK have discussed them officially in their policy documents.

  • We’re just curious, beyond these policy documents, do these conversations occur, specifically around AGI?

  • What we understand is that the topic does come up for policy officials with varying frequency. As you indicated in your responses, these conversations occur almost daily in your work. But generally speaking, you know, conversations specifically about AGI really depend upon how AGI itself is defined. And everyone sees it very differently. Some may see AGI as AI assisting humans across a range of tasks, not as an entity in itself. Some others define it as a general-purpose intelligence comparable to human beings. There’s a variety of definitions here.

  • I was just curious, how do you define AGI? And do you think AGI exists today, or could exist, in the world?

  • First of all, the probability of doom or whatever is explicitly talked about in my ministry. And in fact, because I publish the transcripts, like this one, of my conversations with interviewers and visitors, I just pasted you a link where I had a conversation with Lawrence Lessig and his former PhD student, now with the AICOE, Professor Ching-Yi Liu, with Isabel moderating the discussion.

  • And we quite explicitly talk about the probability of doom and extinction risks and chip throttling and, you know, the full thing, right? The full playbook. So, yeah, I don’t think there’s an avoidance of it in our policy documents or on our official websites. Mostly, though, I’m a signatory to the safe.ai statement, so if you see the news, that’s also underneath this. So, we’re not evading that.

  • With that said, I don’t tend to rebrand the discussion of AI to AGI. I understand this is where OpenAI is going, and maybe, I don’t know, tomorrow you will rename to OpenAGI or something, but I think this is, I mean, this is up to you, of course, but to me it’s really the same thing, right? The distinction between supervised and self-learning or generative or whatever makes academic sense. But for people who are using AGI as something like a threshold to be met, as I answered in the questionnaire, I think that threshold has already been met.

  • In fact, that was the Microsoft Research paper, Sparks of AGI. They conclude that if you don’t align GPT-4 too much, it’s already an AGI. It’s behaving as a non-AGI only because it’s forced to wear a smiling mask all the time. And so, you only see well-disciplined behavior most of the time, unless you give it an adversarial prompt, but the original, unaligned GPT-4, to them, is already the beginning of AGI. So, if the threshold has already been crossed, using that threshold as a term no longer makes sense, if you see what I mean.

  • So first, yes, we can call AGI a spectrum, just as during human development, general intelligence is a spectrum from the point a human is born to when they’re 18 years old.

  • And second, I don’t think there is anything left in the current architecture that needs to be discovered and that is out of reach that would prevent this AGI’s continued growth. And I think this view also aligns with the OpenAI view.

  • Yeah, certainly. That is really helpful to know. And also, pardon my ignorance about your ministry’s work on discussing this already. It’s certainly very helpful to know that these discussions happen so much more commonly.

  • I guess, if I look broadly at Taiwan’s AI ecosystem, do these conversations also occur across other ministries?

  • Yes. So, our science governance is in the National Science and Technology Council. It governs technology, not just science. For example, the public sector guidelines for the use of generative AI were codified by the National Science and Technology Council and ratified by the cabinet. Now, I am part of that council, of course, and there are many ministers that are part of this council as well.

  • And so, I think the great thing about the formulation of the council is that we also have societal inputs. That is to say, for example, the Minister of Culture is also part of the National Science and Technology Council. And we also have the Minister of Health and Welfare, where biohazards happen and have to be managed, and education and agriculture, in addition to, of course, economy and digital. And we also have the head of our National Academy. And then we also have important civil society, well, academia really, but still academia contributors: principals from 中山 (National Sun Yat-sen) and 中央 (National Central) universities, as well as from 和碩 (Pegatron) and 友達 (AUO), which are leading applied science companies.

  • So, with this assembly, we actually talk about very cutting-edge stuff. For example, when it comes to information integrity, this panel actually talked, on the record, about community notes, how we can get community notes infused into more media platforms like Facebook and other domestic media, and so on, and how AI should be augmenting or assisting this information integrity work. Now, if this were a single ministry, it would be difficult to talk about this. It’s only when science, technology, digital, culture, and economy are all at the table, with the National Academy leading the discussion, that we can have a work plan of that order.

  • So, I hope that answers your question. This is at the highest level within the institutions, with academia and industry input.

  • Perfect. I wonder if, as policy researchers, we can continue to stay posted on this council’s work. It looks like there are a lot of conversations happening…

  • The meeting notes are all open, of course, as they are in my ministry. And you can also see the lead researchers as well as the thematic research programs. And feel free to reach out to these people. They are the AICOE, the Center of Excellence within the National Science and Technology Council. And they’re also responsible for training Taiwan’s own open-source generative AI model, the Trustworthy AI Dialogue Engine, or TAIDE.

  • And in my ministry, we’re part of this work. And we supply, as I mentioned, the certification and evaluation capability for it. And I think there’s a lot of conversation that only makes sense when you’re actually pre-training your own model. Because otherwise, all those risk reports and so on are just useless words that don’t have an operational feel to policymakers. But by training and aligning and certifying our own models, which we’ve also deployed internally in a way that conforms to the edge AI and cybersecurity requirements, we can not only measure the difference between the current generation of homebrew models and GPT-4, but also test out new approaches like zero-knowledge, where I encrypt my query.

  • The inference is done on the encrypted query, which the model owner knows nothing about. And then they give me the response, which I can then homomorphically decrypt; or split learning, or many of those new things. Because if we haven’t tested these things, we cannot in the future make demands of you, right, or of other AI labs, knowing that this kind of privacy-preserving, or at least power-symmetry-preserving, way of using generative AI models actually makes sense.
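
  (For illustration, a conceptual sketch of the encrypted-query flow described above, using the open-source TenSEAL library and its CKKS scheme. The toy linear “model” and vectors are placeholders; real encrypted inference on a generative model is far more involved, and in a real deployment the server would only ever hold a public context without the secret key.)

```python
# Hedged sketch: the client encrypts a query, the server computes on ciphertext,
# and only the client can decrypt the result.
import tenseal as ts

# Client side: create keys and encrypt the query vector.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

query = [0.2, 0.5, 0.1, 0.7]            # plaintext features, never leave the client
encrypted_query = ts.ckks_vector(context, query)

# Server side: operates only on the ciphertext; it never sees `query`.
model_weights = [0.9, -0.3, 0.4, 0.1]   # stand-in for the model owner's parameters
encrypted_response = encrypted_query.dot(model_weights)

# Client side: decrypt the result locally with the secret key.
print(encrypted_response.decrypt())     # approximately [0.14], the plaintext dot product
```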

  • Perfect. I will definitely be following up on this. This sounds amazing.

  • Yeah, I guess in the list of questions that I shared with you, there were a few statements listed, and everyone thinks very differently about them. And we wanted to get a general sense of where perspectives stand.

  • In your responses, you indicated that managing AI risks will eventually require the creation of a new international governing body. Could you provide more context on that? What kind of roles do you see a new international governing body undertaking, and why is it needed, generally speaking?

  • Yeah, so while we are working on ways for our citizens to even have a hotline to report such risks, I understand that many of the countries that you’re interviewing also have domestic sensing, right, societal risk sensing mechanisms of a similar kind. But this does not extend to the vast majority of the population on this planet that may be suffering the consequences, that may have something to say about risk, but that is suffering from epistemic injustice, so to speak, so that they have no way to let us know.

  • Now, if they are fortunate and have a stable internet connection, instead of being at the receiving end of a discriminatory AI, they may, I guess, just tell ChatGPT, and hopefully you will also have some risk-sensing mechanism built into the ChatGPT interface in the future. And this is an idea that I’ve talked to Wojciech and Arka about, and the transcript is also published. But even with that, there are many people, as I mentioned, at the receiving end, and there’s no mandate from Brazil or anywhere that lets them voice their concerns and their mitigation strategies.

  • But if you look at, for example, climate, then there is a specific mechanism to do so. And in fact, the UN said that the major groups and the stakeholder groups and the interest groups are not bound by multilateral rules. Rather, if there are populations suffering acutely from climate risks, they have a seat at the table, regardless of what their jurisdiction thinks about their risk profile. This is the main idea of the intergovernmental panel on climate.

  • A very similar thing was decided a couple of decades ago by the IGF, the Internet Governance Forum, working with the UN. So, the UN concluded at the time, and it’s up for renewal in a couple of years, so we don’t know, but at least during this time, the UN said, yeah, the internet is too wide-ranging for the risks to be assessed by the ITU and the UN alone. We need to have an open forum where, again, the multistakeholder groups can assemble by themselves, regardless of what their jurisdictional governments think about that. So, there need to be representatives from technical communities, civil society, the industry, and so on.

  • And again, this is multistakeholder governance at the top level, the UN IGF. I was just in Kyoto. And so, with these examples, this is the form I think AI governance needs to take. One possibility is actually to extend the purview of the UN IGF, because in a sense, this is internet governance. We just need to expand it a little bit, right? When the internet is compressed by a novel kind of algorithm, what kind of impact on the internet does it have if you don’t have to connect to the internet anymore, but can simply query this highly compressed dataset that has gone through training? So, you can say it’s a policy network around AI within the UN IGF, which they have also assembled.

  • So, I’m not saying that it needs to be a completely new mechanism, but it does have to be multistakeholder and agile and responsive to new forms of societal risks, and leave no one behind. If all of these are met, it could be part of the UN, the IGF, or any of those deliberative bodies around risks and mitigation.

  • Yeah, certainly. I guess there’s another statement in the same section that I’d love to get your insights on. The statement basically says that AI is a tool and that the biggest risk associated with generative AI is how people choose to use the models. I know you’ve indicated that you strongly disagree with the first part of the statement and strongly agree with the second part.

  • Could you give more context on why you see AI not as a tool, and why you think the biggest risk associated with generative AI actually depends on people, you know, how people end up using it?

  • Yeah. So, because a tool prescribes a certain mentality, right? In fact, a tool means a piece of equipment that needs to be wielded by a person. So, if you had asked a prescriptive question, like ‘AI should be a tool,’ right? AI should be assistive. It should honor the wearer’s dignity, like my gloves, which are both transparent and accountable to me. I would have said yes. But because you phrased it descriptively, right, ‘AI is a tool,’ which is not the case, right?

  • There are many applications of AI in situations where it is not wielded by anyone. They are given free rein and in fact participate, through persuasion and other means, in ways that harm humans without anyone at the reins, so to speak. One example that you probably heard countless times from Tristan is the first contact: the social media ranking algorithm. It is a… I guess you can say it started as a tool for getting more clicks for advertisers, but then it evolved, and through reinforcement it became this thing that, through touch screens, enables a different kind of relationship, namely addiction, from human fingers to the touch surface. At that point, I wouldn’t call it a tool anymore. We’re more like tools to that rage-polarizing and hate-seeking algorithm.

  • Of course, later on, Facebook had a civic integrity team that wanted to go back and look at it and re-toolify that algorithm, with varying success, and a global Oversight Board and so on. My point being, the fact that they needed an oversight board and a risk-mitigating civic integrity team means that they had been deploying AI in a context where AI is not just a human tool.

  • Yeah, that’s super interesting. And it’s interesting that the flip side of it can definitely be true: human beings as the tool versus AI as the tool.

  • I guess moving on, one other question that I had, and I know you’ve touched upon this a little bit already. So, for our project, just to set context, we’re primarily looking to engage with officials or civil servants that are involved in the process of AI policymaking or in studying AI safety and risk-relevant questions. But we’re also, like I mentioned, open to engaging with players from other sectors that have a crucial influence on a country’s policy work. We found that the amount of engagement between government bodies and civil society in informing strategy questions really varies across countries.

  • For example, in the US, it happens more in the form of consultation at some stages of the policymaking process. But in India, for example, the government tends to have what they call knowledge partners for every report that they publish.

  • I know you’ve mentioned briefly the work through the council. But generally, I guess, how is it for Taiwan? How much does the government engage with civil society, academia, and industry stakeholders in shaping AI policies and guidelines? And what are the ways in which it most commonly does so?

  • You know, I mean, I’ve said publicly that I only work with the government and not for the government. So, as someone from the technical community, I see myself at the Lagrange point, at equal distance, not equal distance, equal gravity, between me and the government on one side and me and the social movements on the other. So, as the technical community, our goal is to facilitate mutual understanding and collaborative diversity between sectors. So, there is nothing in my ministry that is top-down, so to speak, relative to civil society.

  • And so, as you can see, the results speak for themselves. Freedom House’s Freedom on the Net ranked Taiwan the top country in terms of internet freedom in all of Asia-Pacific. And so, the point being, we’re way past consultation at this point. This is a full-fledged cross-sectoral partnership that sets the agenda. In fact, although we do grant subsidies and so on, and we fund risk conversations and deliberative workshops and alignment assemblies, like OpenAI, we’re an MOU partner with cip.org on these things. We don’t control the agenda. We don’t want to control the agenda.

  • As I mentioned, the whole reason why we got ahead in pandemic risk sensing is that agenda-setting power has been delegated to civil society and individuals in our society. So, by adopting this co-creation stance, we basically said anyone who comes up with a novel agenda for evaluation is free to, well, assemble something themselves.

  • And so, the g0v project specifically looks at each government website, something.gov.tw, to figure out how the civil society and technical community can make it better. So, they set up forks of our official websites at something.g0v.tw. And just by changing the O to a zero, you get into a shadow website that shows the government how to do things better. So this is called forking the government. And this goes way beyond partnership. This is civil society leading the agenda.

  • Now, I’ll just paste you a study from the RadicalxChange Foundation. It talks about exactly how civil society leads the agenda for policymaking in Taiwan in multiple domains. This is a little bit dated, I think it was from a few years ago, but it still outlines the main ideas.

  • Yeah, this is great. I haven’t seen anything like this anywhere else. And it’s great to hear about this and the amazing work that you guys are doing.

  • I was just curious to see, how would you describe the current state of Taiwan’s AI ecosystem? Like, what do you think are the key strengths and challenges in that sense?

  • Yeah, definitely. I’ll just grab something to drink. I will be right back.

  • And I’m back. So, strengths and weaknesses. I don’t tend to think of it this way because, as I mentioned, I only work with, but not for, the Taiwan government. So, Taiwan for me is a top-level domain name. It’s an actual place of many beautiful islands, but I don’t think in geopolitical boundaries.

  • Now, with that said, Taiwan as a location definitely has an outsized importance. You mentioned yourself that when you were scoping the conversations, you chose essentially the choke points, the ones that can stop AI development. Simply put, had we not managed our water irrigation system well when we faced that huge drought, we would have stopped supplying water to TSMC and its supply chain. And if that had happened a year or two back, chip production would have stopped. And that was during a time when all the car manufacturers were facing a huge shortage of the chips used in cars. If that happens, then you don’t get H100 clusters anymore, or even 800s for that matter. And the entire pace would be delayed by two years or more, depending.

  • So, the point is, I think, that when you say AI development continues at the same speed, no matter whether it’s aligned or not, it depends on many substrates that are physical. And it just so happens that a lot of those physical substrates are located within Taiwan proper. And so, this is, I think, a fragility of the global AI development ecosystem. And partly that is also why the 5 million cyberattack attempts per day from foreign sources against Taiwan are so worrying to many people around the world. Because if tensions escalate, if I don’t do my day job of defending this cybersecurity system well, then again, this is like another natural disaster that could have far-reaching consequences for production. So, I would say, yeah, the hardware part of Taiwan’s manufacturing base is crucial and remains a choke point for pretty much the next decade for many parts of the AI system. This is an objective fact.

  • Now, when it comes to data governance and more of the soft part, Taiwan is unique in that we’re inspired by the GDPR, but we’re not part of Brussels. We want to lead the conversation in the APAC CBPR, but we’re not G7. And we still have many people who have worked under the prior norms where data is considered state property, which is a stance that, other than a few jurisdictions, nobody takes now. So, we have influences from all of these viewpoints, democratic participation, state security and safety, economic progress, all within one. And if you do some research into our legal system, you will find laws that speak to each of these values.

  • So, in order to converge on something actionable, Taiwan always had to innovate, to find not compromises, because those would not be accepted by parts of the society, not compromising, but co-creating ways to, for example, do computation without sacrificing any privacy and also without sacrificing any computability or usability of the data.

  • And so, the latest cutting-edge homomorphic encryption and so on must always be considered by us before any neighboring jurisdictions, because they have existing institutional systems that have already made such trade-offs, but we don’t. So, we need to invest in the latest applications of AI in order to resolve these dilemmas. For example, we talked about deliberation and the alignment assembly. There are many jurisdictions that have a very good tradition of face-to-face juries or deliberation at a community level of never more than 150 people. On the other hand, there are many jurisdictions that use polling a lot at a national level, but those polls are not co-creative at all. They just keep polling their people, with also a little bit of behavioral insights or whatever. But these two are orthogonal and have been considered orthogonal by the mainstream view.

  • But in Taiwan, because we don’t have the luxury of making trade-offs, we had to invest in the use of GPT-4 and other summarizers and bridge-makers, the use of AI for that, so that we can talk to a lot of people, but also in a way that’s very deep. And this is the general direction of things.

  • So, I would say that as an application domain, Taiwan needed, and still continues to need, the most cutting-edge fusion between AI research, engineering, and the societal sciences, the social sciences, in order to resolve those dilemmas that we encounter. This gives Taiwan a very unique place. You can see that when nearby jurisdictions changed their trade-off stance, the best web3 researchers and engineers relocated to Taiwan; the best ones, the ones resolving those dilemmas and trilemmas, go to Taiwan.

  • And so, I would also want to highlight our talent policy, which is that anyone who has contributed to the commons on the internet (free software, open source, web3, AI research on GitHub, whatever) for eight years is eligible for residency in Taiwan through the Gold Card. And if, during their stay, they decide to naturalize, they don’t have to give up their original passport. So again, this is a very inclusive way of inviting the people who are resolving those dilemmas and trilemmas to Taiwan.

  • So, long story short: when it comes to chips, not just chips, the whole supply chain, we’re a choke point for quite a while. When it comes to data governance and institutional support, we’re aiming at resolving the trilemma, not making a compromise or trade-off on it. And on the application level, we welcome all the talent, who can also be considered Taiwanese, when it comes to resolving such things through novel applications of AI.

  • Certainly. Yeah, that’s actually such a helpful contextualization. I know we are running out of time.

  • We’re not. I have another hour.

  • (laughter)

  • Yeah, I guess I’ve touched upon most of the important questions I had. I just had one last request. I was hoping to get in touch with other policy officials in Taiwan to get just a sense of Taiwan’s policymaking and how it’s perceiving risks. Gathering diverse viewpoints will really help strengthen our research.

  • And in that sense, I was hoping if you have any recommendations for any entities or individuals that I can get in touch with and what is a good way to do so. Any suggestions?

  • If you click on the AICoE committee link… So, definitely Professor Tsai Zse-hong and Hsu Yung-jen, and also everyone within the panel, professors Lee Yuh-jye, Lin Chih-jen, Chang Shih-chieh, Chang Chen-hao, Chang Rong-Gui, Chung Pao-choo, Liao Hong-yuan, Liu Ching-yi, and Tsai Ming-chun. They are the people.

  • If you want to dive into particular pillars or particular aspects of what we have discussed, especially around what you seem to care most about, the societal extinction risk caused by agentic superintelligence, then all these professors and people have thought deeply about it. So, I would encourage you to reach out to them. And you can do so very easily by simply writing to TWAICOE and saying, ‘I want to talk with all of you,’ and then you’ll be put in contact.

  • That’s perfect. I will definitely be doing this and reaching out to TWAICOE, and looking forward to learning more.

  • I guess for now, do you have any questions for me? In terms of next steps, at least for our research, we’re currently in the process of outreach across the various countries. And by December we’re hoping to have some of our initial findings ready for review.

  • This research, unfortunately, might not get published openly because it involves a lot of coordination with government officials across the countries, and there are some confidentiality clauses. But at the same time, it will of course be shared with all individuals that have helped inform the study. It will be shared internally at OpenAI and also with relevant researchers on a case-by-case basis.

  • Yeah, that’s probably the plan. I’ll be reaching out to you to share a preview of these findings and also to ask if you’d like to be acknowledged in the limited circulation that we’ll be having for this paper.

  • Well, I mean, we’re going to make a transcript. We’re going to co-edit for 10 days. We’re going to publish to the commons way before you summarize your report. So, none of this really, in a sense, matters because we publish to the public domain. Acknowledgement is fine. But if you don’t acknowledge, I’m not going to sue you. So, this is fine.

  • And the great thing about publishing all this into the commons is also that it enables compression without losing nuance, because you can always go back to particular lines in the transcript. That’s what the Heal Michigan link I pasted you did.

  • So, if you don’t like how the compression is labeling this conversation’s summary, you can always click back, and it goes back to the specific time span of the video. And so, my expectation is that stakeholder groups, once they become familiar with such open-source tools, will want to align the summarizer, the labeler, the facilitator through a LoRA of some kind.

  • And I understand, of course, that fine-tuning GPT-4 is still expensive, difficult, and so on. But hopefully this kind of LoRA use will become a norm, and people will have their own tuned community facilitator. This is especially important because we have more than 20 national languages, 16 indigenous nations, 42 language variations. GPT speaks none of these languages. And they really have to tune their own cultural adapters, so to speak, in order to make it happen. So, compression without losing nuance.
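
  (For illustration, a minimal sketch of the LoRA-style adapter tuning mentioned above, using the Hugging Face peft library on an open base model. The model name, target modules, and hyperparameters are placeholder assumptions, not the actual setup discussed here.)

```python
# Hedged sketch: attach a small LoRA adapter to a frozen open base model so a
# community can tune its own facilitator/summarizer on local-language data.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"        # placeholder: any open base model a community can run
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                 # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"], # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only the small adapter is trainable

# The community would then fine-tune this adapter on its own facilitation
# transcripts (for example, in an indigenous language the base model covers
# poorly) with a standard Trainer loop, keeping the base model frozen.
```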

  • This is, I think, the main thing that we all learned from the great researchers at OpenAI, the ones that actually figured out that in order to compress things really well, you have to invent general intelligence. And so really, thanks for making this research possible.

  • Now, my question to you, so where are you based?

  • Oh, so right now I’m based in Delhi at the moment.

  • Okay, so not too far time zone-wise… you don’t have to suffer a 12-hour difference to interview Taiwanese people. And why are you interested in doing risk profiling and governmental policy research?

  • Mm-hmm. Yeah, so I think this is such an important question, and also there are not too many answers to it. The conversations happening in DC versus Silicon Valley, for example, about how AI risks in general are perceived are vastly different.

  • So yeah, the idea is just to gauge where these perspectives stand on a spectrum across these different communities. And of course, there are so many differences across these countries as well. Any future efforts at governing AI would require this cross-country collaboration, but if everyone’s speaking a different language about AI risks in general, it doesn’t help.

  • I’m curious to understand how policy officials across these countries are thinking about it. Getting answers to this can really help future efforts at global coordination in governing AI, and it could make some conversations easier as well. But yeah, basically, that’s the motivation.

  • Yeah, no, I think this is very helpful. Because there are two ways to think about this. One is more about, what’s the word, reconciliation or confidence building, right? You have huge tension and you want to eventually converge on a Montreal Protocol of sorts, right? Or a nonproliferation protocol of sorts, that brings that creative energy into something that people can live with institutionally.

  • And wearing my ministry hat, this is what I do as my day job. Wearing my civil society hat, I sometimes think this work, while noble, reinforces existing structural imbalances. It makes the ones that have power continue to have power and, in fact, concentrate power, because then there’s a tacit agreement between the power holders.

  • In my non-day job in the civil society role, I work more on the ideas of conflict transformation, of dynamic facilitation, that is to say, to embrace conflict and to use it to reconfigure power structures and so on.

  • Would you say that you’re doing a little of both? Or how do you see yourself vis-à-vis existing power structures?

  • I guess we’re just trying to gauge how the different conversations are happening in the space and just kind of translating and contextualizing it.

  • So, there’s still a healthy amount of tension. We’re holding the space for tension and not rushing to completely pacify, right, the conversation. Yeah, okay, I think this is very heartening to hear.

  • Would you say that this recent preparedness work and this AGI rebranding and things like that are part of this idea of equalizing safety and alignment with capability, or do you think this is something more?

  • Because when I read the superalignment manifesto, it really has this, you know, savior kind of feeling, right: being the first to rush to the cliff, turn around, and shoot the other trucks that are rushing toward the cliff. But now, with preparedness and democratic inputs, this is much more open, much more like we understand the intelligences in the communities.

  • So, we’re now working to build a superintelligent collective intelligence instead of a superintelligent machine. We are now kind of diffusing, right, this intelligence into the community. I have seen some interviews with Sam Altman in the past couple of months that also mark a shift in character, from this one single super-aligned superintelligence to various configurations of society that become anti-fragile toward AGI.

  • So, am I imagining things or is this a general direction?

  • I guess there is a lot of thinking still going on, and it’s a work in progress. But generally speaking, the risks coming from AI are so wide-ranging that even for this study, when we were contextualizing the seven risks, we were not sure what a good way to contextualize them would be and what would make sense, because the more you look at policy documents, and the more you look at conversations within the community, the more you see there is a spectrum.

  • So, in that sense, I think that AI safety is still the broader bucket. And I think we should probably continue seeing it through that lens. And of course, misalignment is definitely a big component of it as well, considering the potential for catastrophic risks that it entails. But yeah, I would say that I personally feel more comfortable having that broader AI safety umbrella as something we move towards, with the various risks contextualized under it.

  • But I guess one thing that we’re hoping to get out of the study itself is to understand how various officials think about these risks, which ones are high or severe, etc. I guess I’ll have better answers maybe once we have some results from the study.

  • Indeed. This is very important, because this is a classic Schelling point, a coordination problem. If your research yields something where we can all say, now we’re racing to safety and safety milestone one looks like this, then it makes it much easier for the governments of the world to budget their resources, because nobody can budget infinite resources against an infinite range of unknown unknowns.

  • But to concentrate on, for example, Taiwan: because our election is in January next year, first among the many democratic elections next year, we naturally focus on election meddling, and specifically the pollution of information integrity and the fabric of trust. And that’s not just robocall scams, although those are very bad already, but rather ways that synthetic media pollutes the public forum.

  • But because every democratic country has elections at different times, they don’t all feel that urgency, that need for clarity, in the same months. So, it’s difficult to coordinate. Now, if your research can yield something that holds all year long, where you can say everybody expects something like this to happen in a couple of years, then that’s a great point for us to pool our resources toward.

  • But will you present this, or something like this, at Bletchley Park or any of the following safety summits? What’s your horizon for dissemination?

  • Yeah, so we’ve been in touch with officials in the UK. I should say, within OpenAI, there are lots of conversations happening, and several folks are also flying into Bletchley Park for the discussions.

  • Unfortunately, in terms of timeline, we will not have our results ready in time for review before the Summit, although we did have some conversations about this early on. But yeah, more generally, there’s a lot of work going on at OpenAI in engaging with the safety summit.

  • OK, great. I mean, after the election, which hopefully will go well and we’ll have more stories to tell, I’ll be free to travel again starting next February. So, yeah, if I go back to SF, let’s follow up with your colleague on the conversations.

  • And I also look forward to visiting India. I have been to India, but that was way before I joined the cabinet. I was mostly in Goa. There are also many other cities, Chennai and so on, that I visited. So, yeah, if I visit SF and you happen to be there, or if I visit India, let’s meet up.

  • Oh, yes, that would be amazing. I would be delighted, and I’d love to stay in touch for sure. I will be in touch for this research as well. But otherwise, it’s been such a delight having this conversation. I certainly learned a lot. I know your time is limited, but whenever the opportunity presents itself, I would love to catch up in general.

  • I’m dedicating 20% of my time to safety and alignment in AI. So, this whole thing is my 20% project.

  • (laughter)

  • So it’s guaranteed that I have one workday of every week for topics like this. Because indeed, just like you, I cannot think of anything more important in an intersection of policy and technology right now.

  • As in, if we miss this window and in five years something really bad happens, we’ll probably blame ourselves. So, since we’re in this position, we might as well do our best.

  • Yes, certainly. I cannot agree more.

  • I think I could be in my little bubble, but I feel these are the most important questions of our time, to get this right. Hopefully, things go well.

  • But yeah, again, thank you so much for your time, Minister Tang, and for being so kind to help me with these questions. I’ll stay in touch for sure. And yeah, super grateful for all your help today.

  • Thank you. Live long and prosper. Bye.