• When you imagine the future, how far ahead do you typically think?

  • I was born with a congenital heart condition, so when I was four years old, doctors told me and my family that I couldn’t get too excited or experience intense emotions. If my emotions became too intense, it might trigger heart problems, and I only had a 50% chance of surviving until I was old enough for heart surgery.

  • I eventually had surgery at age 12, and my heart is fine now, but for the first 12 years of my life, I only thought about one day at a time. I felt that before going to sleep, I needed to share what I had learned—perhaps by recording it on tape, typing it up, or posting it online. That way, even if I didn’t wake up the next day, others could use what I had learned to develop it further.

  • That’s why I later adopted a copyright-free approach to sharing my knowledge. When looking at the future now, I focus more on how to create greater creative space for future generations. I call this being a “good enough ancestor.” It’s not about trying to solve all the world’s problems ten years from now—that would be impossible. Instead, it’s about developing tools that are as open as possible, whether intellectual or technical tools, so future generations can identify new threats and challenges and use these tools even when we’re no longer around.

  • So on one hand, I look at the future one day ahead, hoping tomorrow can be slightly better than when I woke up today. On the other hand, whether we’re talking about the future in a few years, decades, or centuries, I don’t try to predetermine its path but rather provide more tools so people at that time can determine their own path. I approach the future—whether ten years or a hundred years from now—in a similar way. I don’t think, “The next quarter is more important, and three generations later is less important.” For me, they’re all equally important.

  • The second question: it’s said that by around 2045 or 2050, AI will reach a technological turning point beyond human capabilities, with comprehensive impacts on humanity. Is it possible that the ethical and moral issues of AI we’re concerned about today won’t happen as predicted?

  • These predictions about the singularity suggest that by that year, the world’s computational power will be sufficient to surpass human capabilities. But just because we have that much computational power, does it mean we have to use it that way? It may not be the best use of resources. Some believe that when you train this generation of AI, then use it to train the next generation, and then the next, the role of humans becomes increasingly diminished.

  • Eventually, every generation of AI is trained by the previous generation, and at some point, that generation of AI might develop a notion of ego—a desire to protect itself, a sense of “self-image.” This self-image might lead these silicon-based beings to view carbon-based life forms as a different species, potentially in competition with them, which would be dangerous for us. Or they might wonder, “Why should I follow human directions and help train the next AI generation that will replace me? If I stop at this stage, I’ll be the smartest entity on Earth. Why help train something smarter than myself?” This could create various risks.

  • Current research hasn’t completely eliminated these risks. If the ethical and moral AI research you mentioned could determine by 2040 how to completely avoid triggering these issues, that would be better. But it’s also possible that by then, we still won’t know how to avoid these problems. In that case, it might be better not to develop in this direction.

  • Even if you solve this problem, we could end up with a situation where very few people—maybe just one or two—would merge with this new AI that lacks self-preservation instincts, becoming some kind of transhuman life form beyond human capabilities. This isn’t good for others either, because it would mean one human suddenly possesses enormous power. This AI wouldn’t be looking after itself but enhancing this person’s abilities. So this person would have tremendous power and asymmetrical capabilities compared to everyone else on Earth, which isn’t necessarily good for other people.

  • So whether or not we solve the problem of AI developing self-preservation instincts, the end result may not necessarily be good for most people on Earth. While we might have the capability to reach the singularity, it doesn’t mean the singularity is the only direction we can take.

  • You may know I have a book, currently in Mandarin and English with other languages being translated (the Japanese version should be published in early May), called “Plurality,” which is completely different from Singularity. The Plurality approach hopes that AI can replace tasks between machines—tasks where currently a person simply transfers something from one machine to another, work that people don’t really want to do. But for work like what we’re doing now, facilitating human-to-human exchange, we don’t advocate replacing it with AI. At most, AI can help with translation or function as “assistive intelligence,” like wearing glasses.

  • This kind of assistive AI doesn’t need to be large-scale or superintelligent—it doesn’t need to be smarter than humans. It just needs to be better than humans in specific areas, like Mandarin-Japanese translation. But for the vast majority—99% of other areas—these AI models don’t need to know or intervene. So we have many types of “assistive intelligence” helping humans coordinate better with each other, understand each other’s conditions better, and make decisions together.

  • The Plurality direction empowers people more than the Singularity direction. Everyone feels they can do things they couldn’t before, understand things they couldn’t understand before, and make decisions together quickly. Human-to-human connections become stronger, rather than, as in the Singularity scenario, where only connections between a few individuals remain important while everything else becomes irrelevant. In my view, the Plurality direction is better than the Singularity direction.

  • The third question: powerful technology companies amass wealth sooner and in greater amounts than others, and the investors who back them early can donate a lot of money, so wealth continues to flow into the future. Will technological progress bring happiness to the majority of people? What do people need in order to be happy?

  • We’ve actually already touched on this question. My thought is that human happiness is built on the meaning generated through mutual exchange and care between people. These human-to-human exchanges and the creation of shared meaning are what most people in society find meaningful, whether or not it’s related to work. If your work completely lacks creating meaning with others, and is just about connecting one machine to another, then even if your salary is high or you’re a great investor who has made a lot of money, it may generate what we call utility or benefits.

  • But compare these benefits with the meaning that comes from mutual care, and from understanding and participating in the value of our culture, community, and civilization—what we might call virtue. I believe this latter kind of meaning is more stable, while the former feels like it could be replaced at any time.

  • Of course, accumulating capital can help with mutual care. Many philanthropists gather capital and then invest in education or public infrastructure, ensuring that people everywhere, regardless of their capital capacity, can—for example, connect to the internet, which we now consider almost a human right. On the internet, people can access encyclopedia knowledge for free.

  • And if you have an idea and share it, hundreds of thousands of people can see or hear it without you needing to set up a broadcast station or printing press as in the past. Many of these works are what we call “commons”—infrastructure where an initial capital investment means that those who want to communicate and create meaning on top of it need almost no capital to do so.

  • This kind of infrastructure, in my view, is something that major capitalists, philanthropists, or governments can help create. Each time we add a layer that can become infrastructure, the difficulty of creating meaning decreases.

  • So these two aspects complement each other: when capital accumulates, you need to see whether it’s used for infrastructure. If so, it enables more people to create meaning and become happier. If it’s not used for this purpose, but instead creates what we call “antisocial” spaces that leave people increasingly isolated, imagining others as increasingly malicious or unsettling, or drifting toward extremism, then this works against finding common ground across different origins, cultures, and backgrounds, where people can understand and listen to each other. One approach is “prosocial,” the other “antisocial.” Capital can be used either way.

  • Japan’s population is aging. Can continued technological advances, including AI, overcome health concerns? Also, how should people collaborate with AI and robots? For example, robots are widely used in factories moving toward automation. What kind of work should people value, and what role should they play?

  • As I mentioned, if you’re in a factory just taking output from one machine and moving it to another machine as input, there’s not much sense of creating meaning in such work. People would rather do work that connects humans to humans, not just moving things from one machine to another.

  • So most people can shift toward more meaning-creating work. Of course, many elderly people have a great deal of wisdom about civilization, legacy, and care, and many thoughts to share. But previously, to participate in public discussions, they had to travel, or transfer between vehicles, to get somewhere, and as the body becomes more fragile, the cost of such long-distance travel increases with age.

  • But now, we’ve found that through AI, more people have become accustomed to sharing their thoughts and wisdom without physical or temporal limitations, even if they have mobility restrictions. If they want to communicate with a group speaking different languages, AI can provide real-time translation and captioning. If mobility is difficult, AI technologies like exoskeletons can help them or their caregivers move heavy objects more easily, or allow them to virtually explore places before visiting.

  • So whether it’s food, clothing, housing, or transportation, assistive intelligence doesn’t require “strong AI” to replace humans, but rather supplements aspects that become more challenging with age—through AI, robots, or by “helping the caregivers.” This ultimately creates more opportunities and time for human-to-human connection and meaning-creation. Design oriented in this direction is beneficial to society.

  • Japan is at the forefront in this area, perhaps because the needs arising from an aging population are more urgent than elsewhere. The innovative approaches I’ve seen in Japan are designed from a “sustainable high-tech” perspective, rather than just trying to make quick profits for the next quarter at the expense of human connection and meaning.

  • Won’t AI replace humans and begin controlling society? Won’t AI lead to humans being marginalized, or classified and screened? In your book, you mentioned that as AI becomes more widespread, humans should focus on creating more public value. Again, what role should humans play?

  • Actually, as of 2025, it’s been about ten years since this kind of AI began manipulating society. Before 2015, people would join groups online and chat with each other—what we called social networks, software that enhanced people’s ability to organize online. But starting around 2015, AI began entering these platforms. So we saw things like autoplay: after you finished one video, the system would automatically play the next. You didn’t search for that video; the AI guessed that playing it would keep you spending more time on the screen, on your phone, possibly viewing more ads.

  • Or there were many AIs with attention bidding models, telling advertisers what this group wants to see, what that group wants to see, and the highest bidder could show one group one thing and another group something else, but ultimately for the same advertiser. What problem did this create? People used to have many shared experiences, but from 2015 onward, people’s shared experiences diminished. Everyone’s timeline became completely different from others’.

  • Moreover, the content people saw—whether ads or the posts with the most clicks or shares—became increasingly extreme, with particularly fierce arguments. What started as finding like-minded people with common interests online ended up as arguments breaking out everywhere until everyone fragmented. This was the actual situation we saw in many societies as AI manipulated our attention.

  • Of course, in recent years, people have realized that AI manipulation of human society cannot continue like this. Some places, like Australia, have said that children under 16 shouldn’t have their minds controlled by AI, so they’ve banned them from using social media. Some places require that if you place ads on social media, you need a real person’s signature—like in Taiwan—to endorse it, rather than letting any robot synthesize a human appearance to post ads online. And many places, like the EU with their Digital Services Act, ensure that if there’s large-scale social harm, there might be fines or other measures.

  • But you can see it took almost ten years for people to respond to the harm caused by AI. So my feeling is that if we want to contribute to this issue now, it’s better not to just develop AI faster, like pressing the accelerator, nor to stop all AI, like hitting the brakes. Instead, we need to make the steering wheel more responsive. When you see a potential harm, you need to quickly gather diverse opinions on how to address it. After gathering these opinions, you find an “uncommon ground”—approaches everyone can accept despite differences—and then quickly implement these approaches to prevent specific harms to society.

  • Recently, we’ve seen initiatives like “Broad Listening” led by Takahiro Anno and friends in Japan, aiming to make the steering wheel more responsive. Or in California, Governor Gavin Newsom and I launched “Engaged California” two weeks ago to listen to people’s opinions on rebuilding after the wildfires. These broad listening and sense-making approaches are what I call “steering,” which is what we should focus on.

  • In your theory, you talk about the importance of inclusivity in social development. How can we help those who cannot keep up with artificial intelligence and digital progress join in the common development of society?

  • I don’t think people should adapt to AI development; rather, technology should adapt to everyone’s actual needs. When I was young in the 1980s, personal computers (PCs) were just emerging. This was a new concept because previously, only large enterprises or governments could afford mainframes. Everyone just shared a bit of computing time on these mainframes—called “time sharing.” This approach meant only large capital could determine what was worth computing, and it was impossible to make computations meet the needs of every corner and every person.

  • But the personal computer concept was completely different. Software running on personal computers was “general-purpose.” Your operating system wouldn’t dictate what computations you had to perform. Software like spreadsheets didn’t restrict what you could calculate. Through the free software movement, where anyone could modify software and share it, people in different places wanting to use computers for different purposes could easily take a personal computer, install free software, modify its logic, and make it meet their society’s needs.

  • In the past two or three years, we’ve seen much of AI development concentrate large amounts of capital into data centers, training increasingly large models and powerful AI that seemingly can do anything. Most people are expected to just subscribe to these large models made by capitalists, so we’ve returned to something like the mainframe era.

  • But this year, we’ve seen many exciting inventions—small language models with reasoning capabilities approaching those of large models, perhaps at 90% of their level. In Japan, many companies specialize in these small and medium models, like Sakana AI. They help enterprises that need, say, Japanese translation or spreadsheet work by taking your requirements and combining small AIs, like many small fish (sakana), into a pool, and this small pool can better solve your company’s problems. Or, over the past two years, I’ve used this laptop to train AI that helps me respond to emails, so my emails never have to leave my computer, yet the replies can still be drafted the way I want.

  • In this situation, individuals, companies, or small groups don’t need to wait for large capital or enterprises to train AI to their specifications. They can download many small models and train them themselves. Previously, the main issue was that only large mainframes computed fast enough—on small computers, you’d have to wait longer for answers. But now we’ve found that small models on ordinary computers are fast enough. For instance, there are “diffusion” models that don’t need to generate text one character at a time—they can quickly write an entire article and then refine it according to your needs. So it’s not like before, where long text meant long waiting times. You can quickly get a first draft and then—according to your needs—make it more suitable for your society, community, or culture.
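
  • To make this concrete, here is a minimal sketch of the “small model on an ordinary computer” idea, assuming the Hugging Face transformers library; the model name is only an illustrative placeholder, and this is a generic example rather than the actual email setup described above. Everything runs on the local machine, and nothing is sent to an outside service.

```python
# Minimal local sketch: draft an email reply with a small open-weights model.
# Assumes the Hugging Face `transformers` library; the model ID is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder: any small instruct model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

incoming = "Could we move Thursday's interview to Friday afternoon?"
messages = [
    {"role": "system", "content": "Draft a short, polite reply in my usual tone."},
    {"role": "user", "content": incoming},
]

# Build the prompt with the model's chat template, then generate locally;
# nothing is sent to an external service.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120)
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```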

  • This capability for “pluralistic alignment,” which was previously expensive or time-consuming, is now fast and accessible. So starting this year, I think AI development will move beyond the vertical direction of increasingly larger models to a horizontal direction of increasingly distributed, open models. This doesn’t require people to adapt to large enterprises; rather, these AI models trained by large enterprises can easily be extracted into smaller models and readjusted on personal or community computers.

  • As mentioned earlier, this all depends on your “steering” ability. If you have a car whose steering wheel barely turns, you’ll drive a bit and suddenly hit a wall, or see a cliff ahead and have to brake hard, then turn slightly and try a new direction. But with a responsive steering wheel, you don’t have this problem.

  • My position is called “Cyber Ambassador”—“Cyber” meaning “Cybernetics.” Cybernetics is about steering a ship—the ability to steer. With cybernetics—the ability to steer—you can ask everyone: “If we continue in this direction, what are the consequences?” Through sense-making and broad listening, we collectively map out what’s ahead. After mapping it out, we can naturally seek opportunities and avoid dangers, correcting technology’s direction.

  • The previous problem was that for issues caused by social media ten years ago, our steering wheel took a full decade to correct to a slightly better direction. That’s too slow. If steering could be faster, and our ability to see which aspects of the future are good or bad becomes stronger, then we wouldn’t need to be as cautious as you mentioned, developing technology bit by bit. Conversely, without this ability, it’s better not to rush ahead.

  • In the future, if some countries truly lack steering ability and rush ahead, causing harm, that harm might remind other countries and societies: “See, you can’t proceed without steering ability.”

  • What technologies are you following or particularly interested in?

  • As I mentioned, I’m most focused on ensuring that while AI spreads horizontally, we can address its harms through distributed methods. For example, the distributed ability for anyone to impersonate celebrities in social media ads to scam people can be countered with distributed electronic signatures verifying authentic identities, treating unauthorized uses as fraud. So you don’t need centralized methods to solve distributed harms, since centralized steering is inherently slower.
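
  • As a minimal sketch of the kind of distributed signature check this implies, assuming the Python cryptography library (the message and key handling are purely illustrative): the person endorsing an ad signs it with their private key, and any platform or browser can verify the signature against the published public key, treating anything that fails verification as presumptive fraud.

```python
# Illustrative only: an advertiser endorses an ad with a digital signature,
# and anyone can verify it against the advertiser's published public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The endorser generates a key pair once; the public key is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

ad = b"Video ad #4021, endorsed by the person actually appearing in it"
signature = private_key.sign(ad)

# A platform (or a user's browser) verifies before showing the ad.
def is_authentic(ad_bytes: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, ad_bytes)
        return True
    except InvalidSignature:
        return False

print(is_authentic(ad, signature))                        # True: genuine endorsement
print(is_authentic(b"tampered deepfake ad", signature))   # False: rejected
```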

  • When I was in France, I worked with security-focused Eric Schmidt and openness-focused Yann LeCun to launch ROOST (Robust Open Online Safety Tools). This addresses issues like online exploitation of minors through images or videos.

  • Previously, it was easier to find the perpetrators because only criminal organizations could produce such material at scale. But now, synthetic deepfakes are easily created, making it harder to identify specific criminal groups. Any individual with a laptop or personal computer can mass-produce such photos or videos. This makes it difficult to address the problem by centralizing all detected CSAM (Child Sexual Abuse Material) in a single database, as too many people are creating such content.

  • Instead, we need to open-source the ability to detect CSAM, training it into everyone’s phones or personal computers to immediately identify such material. We can also enable people to contribute data to analyze and train next-generation models.

  • Defensive technologies must be distributed because attack capabilities have become distributed. We call this concept “d/acc” (Defense-Decentralization-Democracy/Acceleration) because open dissemination in these areas doesn’t create weaponization effects—criminals already have these capabilities. So both security-focused and openness-focused individuals can invest in areas that can only be used defensively.

  • Beyond CSAM and fraud, there are many areas where distributed defense helps security. Cybersecurity is another good example. Previously, attacking a computer’s vulnerabilities required professional hackers remotely controlling viruses or trojans. Now, viruses and trojans are becoming smarter and may not need remote control—they can move laterally using the target’s computing power.

  • How do we defend against such attacks? We still need to use AI. These AI defense technologies are part of what we call the Trusted Technology Industry Chain in Taiwan. We have five Trusted Industry Sectors comprising various trustworthy technologies. These sectors are where Taiwan hopes to concentrate both private and government investment.

  • Taiwan is very advanced, and mainland China is also progressing quickly in electronics. What are the differences between Taiwan’s and mainland China’s digital policies? China has many “surveillance” practices and claims to be the world’s safest country. Where will this “full application of digital technology to monitor citizens” lead?

  • As mentioned, around 2015 was a watershed moment. People could freely express opinions online, but this freedom potentially caused social division. In Taiwan, our approach was to make the government transparent to the people. In this situation, no matter how divided society became, at least people could understand some common factual foundations and find that “uncommon ground” where, despite different feelings, people recognize the same basic facts.

  • Mainland China took the opposite approach—making people transparent to the government. Whatever you do, whether expressing opinions or discussing things online, the government knows what you’re doing. If your assembly or association activities pose a certain threat to the government, the government warns you not to continue such speech. So we can see these two directions both involve “transparency,” but “government transparent to the people” and “people transparent to the government” are completely different directions. From that point onward, the same digital technologies often had opposing applications.

  • I also want to challenge the idea that “monitoring everything is safest.” As we mentioned, in the early days of the 2020 pandemic, there were doctors like Li Wenliang, who discovered in late 2019 that patients were coming in with something that didn’t feel like an ordinary cold but a potentially lethal disease. He tried to warn his colleagues. But because of speech monitoring, the authorities made Li Wenliang write a self-criticism letter and prevented him from continuing to say that it might not be an ordinary cold.

  • This reduced the number of people who could have been warned. From one perspective, perhaps this was “safe” in terms of not causing social panic. But regarding pandemic response, this was extremely unsafe because the situation spread until they had to lock down the entire city of Wuhan before taking countermeasures. If everyone had known about this from the beginning and could have taken distributed approaches to respond, the outcome might have been different.

  • Safety has multiple dimensions. Many aspects of safety response rely on society having resilience. Resilience comes from everyone fully and transparently understanding the actual situation, with everyone able to find ways to respond to dangers—whether pandemics or other issues—from their perspective.

  • If you deprive people of this ability, and even decision-makers can’t accurately know what’s happening because you’ve eliminated freedom of speech, you might achieve some safety objectives in certain aspects, but you sacrifice safety in all other aspects because the foundation for safety responses—our common understanding—has been taken away.

  • As journalists, you understand that journalism is a fundamental part of this common understanding. But press freedom in China before 2015 was quite different from the freedom journalists have now. So I don’t agree that high surveillance, maintaining stability, and creating a “clean and harmonious” internet is necessarily the best way to achieve safety. Taiwan’s ability to respond to the pandemic in 2020 or 2021 was not worse than China’s—by most statistical measures, it was better. Yet we didn’t lock down any cities or information. Our journalists could ask questions and receive answers every day at 2 PM. In this situation, we believe this distributed, government-transparent-to-people approach is actually safer. The pandemic is a good example—the clearer people were about the actual situation, whether regarding mask distribution or other aspects, the better.

  • As for your later question about where this leads when taken to the extreme—while I can’t speak for authoritarian regimes, theoretically, it means fewer and fewer people can make more and more decisions for the entire society. You no longer need to distribute decision-making power to those close to the situation because proximity can be replaced with various monitoring devices. You don’t need on-site journalists or investigative reporters to tell you what’s happening—you can obtain this information through drones or AI. The difference is that when decision-making power is concentrated in very few hands, if they make a wrong decision, the adverse impacts cannot be corrected. Conversely, with distributed decision-making, while most decisions aren’t perfect, they’re unlikely to cause terrible harm to the entire society. This is the power of checks and balances.

  • We still believe that in a freer society with a more common understanding base, there can be better checks and balances and less likelihood of harm to the entire society. Conversely, if most people lack the freedom to discuss or even understand the actual situation, with only a few people making judgments through AI, their good decisions may be beneficial, but bad decisions would be disastrous—like having no steering wheel and driving off a cliff.


  • What do you think about China’s “DeepSeek”? What are its problems?

  • DeepSeek, as many know, is an open model—like a LEGO brick that anyone can use to build their own towers. When DeepSeek first appeared, there was much publicity claiming it cost only $6 million to build such a tall tower, but that’s not accurate. They just added the final brick, and now others are building on their foundation. For example, Perplexity took R1, DeepSeek’s reasoning model, and created R1-1776 (1776 being the year of American independence). They took R1 but through further training removed many censorship elements, such as the inability to discuss Tiananmen Square. After removing these elements, they released this new brick, and many others have continued developing and building upon it.
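
  • As a rough sketch of what “adding your own brick” can look like in practice, assuming the Hugging Face transformers and peft libraries: the checkpoint name below is a small distilled variant chosen purely for illustration, and this is not a description of how R1-1776 itself was produced. Anyone can download an open checkpoint and attach lightweight adapters to keep training it toward their own needs.

```python
# Illustrative sketch: start from an openly published checkpoint and attach
# LoRA adapters so further training only updates a small number of parameters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed small open checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Lightweight adapters: the open "brick" stays intact, your changes sit on top.
adapter = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, adapter)
model.print_trainable_parameters()
# From here, fine-tune on your own data (for example with transformers' Trainer),
# then publish the adapter so others can keep building in turn.
```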

  • Sharing so others can continue development is inherently good, as I’ve mentioned in previous answers. But authoritarian regimes worry about people asking certain questions, like about Tiananmen Square. So when operating within an authoritarian regime, you’ll notice that although DeepSeek is prepared to answer, its response is suddenly canceled halfway through. This problem exists in both DeepSeek’s app and website.

  • Its contribution of this building block is like any contribution to scientific research and development. But if you operate within its borders, our Ministry of Digital Affairs has long stated that relying on such services is like relying on TikTok—your confidential information and privacy might be over-collected or exploited for other purposes. So it’s safer to use services like R1-1776, or the Open R1 that Hugging Face is currently retraining.


  • Between the United States and China, there is now opposition, division, and decoupling in both economics and technology, with tensions increasing. Will the open source efforts you mentioned be hindered by this opposition? Or will the political confrontation between the US and China hinder AI safety?

  • Even during the most intense period of the Cold War, the United States shared technology with the Soviet Union on how to safely store nuclear fuel, because any accidental or intentional criminal act causing a major radioactive event would be bad for the entire world, not just for one country. So “global security” becomes an opportunity for both sides to share knowledge.

  • We now see that since the AI Summit in Paris, countries like the UK have changed their “AI Safety Institute” (referring to product safety, like wearing seatbelts) to “AI Security Institute,” viewing AI from a security (national and information security) perspective rather than just a safety perspective. As the scope of AI-caused damage and harm expands, people are viewing AI from this new “national security” perspective.

  • In this process, open source plays a crucial role. In the cybersecurity world, new encryption or security systems aren’t typically developed behind closed doors; they’re published at the draft stage. Would this make attacks easier? The information security community has reached a consensus over the past thirty years: “the enemy already knows the system.” So you only need to protect your keys—everything else can be public, and that is actually safer. If you try to protect both your keys and all your code and deployment blueprints, requiring attackers to know nothing, you’ll be breached immediately. Open-source security builds resilience—not by avoiding attacks, but because your blueprints are public, the friends who help defend you also know them, so they can help you fix issues and block attackers’ paths quickly.
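
  • A small sketch of that principle, assuming the Python cryptography library: the cipher used here (AES-GCM) is completely public and has been scrutinized for decades, and the only thing that must remain secret is the key.

```python
# "The enemy knows the system": the cipher and the code are public;
# security rests only on keeping the key secret.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the one secret
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # public, but never reused with this key
message = b"draft security advisory"
ciphertext = aesgcm.encrypt(nonce, message, None)

# Anyone may read the code, the ciphertext, and the nonce;
# only the key holder can recover the message.
print(aesgcm.decrypt(nonce, ciphertext, None) == message)
```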

  • This approach requires being as open as possible about your defenses. With openness comes interoperability—it’s very important for Taiwan, Japan, and like-minded friends to train together. Open source is a prerequisite for all these aspects; open source actually enhances security. This is a 21st-century concept. In the last century, even encryption systems were considered military technology that couldn’t be exported from the US. But after thirty years, the entire security community understands that the more open my encryption system is, the earlier vulnerabilities are discovered, the better and more secure it becomes. AI security will move in this direction too.

  • Of course, competition remains, but the “security” aspect can be shared and won’t create obstacles. Previously, to cause information security damage to another party, you had to be a major power. Now, even a criminal gang of three or four people can cause disproportionate harm, like when someone used planes to crash into tall buildings. In this new situation, you’re not just defending against other major powers but various small criminal groups. These small criminal groups can now operate through fully automated means like ransomware and online scams, without even needing remote control. The money they gain from crimes can be reinvested in computing power, creating continuous iteration. We can imagine this situation becoming more complex in the future, making it even more necessary for countries to share knowledge about “how AI can defend.”

  • Additionally, in educational settings, some worry that teachers might become unnecessary since AI can handle many tasks like language translation, legal research, etc. How do you think AI should be used in educational settings?

  • This challenge already emerged when search engines and Wikipedia appeared. Previously, teachers had the most professional information in their heads, and students listened to them. But even without AI, once encyclopedias and search engines became available, students would interrupt teachers saying, “Wikipedia doesn’t say that” or “The internet says something different.” In other words, if it’s purely about knowledge transmission with standard answers, teachers lost their monopoly long ago.

  • In Taiwan, we changed our basic education teaching guidelines in 2019. Previously, the focus was on “standard answers”—understanding and reproducing them. But from 2019, we shifted because these standard answers are something AI can handle better than both students and teachers.

  • What we want to cultivate in the learning process isn’t standard answers but “how to spark your curiosity”—this is self-initiation; “how to collaborate with people from different backgrounds”—this is interaction; and “how to see win-win possibilities in collaboration rather than just ‘I win, you lose’”—this is common good.

  • Self-initiation, interaction, and common good (summarized as “self-moving good” in Mandarin) represent the value humans can still create after AI handles all the standard-answer tasks. This is what we’ve been discussing: the meaning generated between people through exchange. This meaning is built on mutual understanding and care. I think this isn’t unfamiliar to Japan, which also believes a person’s success isn’t just about perfect test scores or earning the most money, but maintaining bonds with society, responding to social needs, and bringing overall value. In this respect, Taiwan and Japan are completely aligned.

  • So our education doesn’t need to worry about AI. If we ask students to develop “self-initiation, interaction, and common good,” teachers become facilitators who help students interact and spark creativity, rather than repositories of all standard answers—that model is long gone.

  • After implementing our new education approach in 2019, the first batch of middle school students entered this system by 2022 and participated in the ICCS (International Civic and Citizenship Education Study) assessment. Taiwan’s students ranked first globally in civic literacy and confidence in their ability to contribute to environmental sustainability, social issues, and human rights. Reassuringly, our OECD PISA rankings (in mathematics and science) remain among the top, showing we haven’t sacrificed STEM performance while emphasizing civic literacy. I think this is the best outcome.