What do you think about China’s “DeepSeek”? What are its problems?
DeepSeek, as many know, is an open model—like a LEGO brick that anyone can use to build their own towers. When DeepSeek first appeared, there was much publicity claiming it cost only $6 million to build such a tall tower, but that’s not accurate. They just added the final brick, and now others are building on their foundation. For example, Perplexity took R1, DeepSeek’s reasoning model, and created R1-1776 (1776 being the year of American independence). They took R1 but through further training removed many censorship elements, such as the inability to discuss Tiananmen Square. After removing these elements, they released this new brick, and many others have continued developing and building upon it.
Sharing so others can continue development is inherently good, as I’ve mentioned in previous answers. But authoritarian regimes worry about people asking certain questions, like about Tiananmen Square. So when using the service hosted within an authoritarian regime, you’ll notice that although DeepSeek begins to answer, its response is abruptly withdrawn halfway through. This problem exists in both DeepSeek’s app and its website.
Contributing this building block is like any contribution to scientific research and development. But if you rely on the service operated within its borders, our Ministry of Digital Affairs has long warned that this is like relying on TikTok: your confidential information and privacy may be over-collected or exploited for other purposes. So it’s safer to use services like R1-1776, or the Open R1 that Hugging Face is currently retraining.
Between the United States and China, there is now opposition, division, and decoupling in both economics and technology, with tensions increasing. Will the Open Source efforts you mentioned be hindered by this opposition? Or will the political confrontation between the US and China hinder AI safety?
Even during the most intense period of the Cold War, the United States shared technology with the Soviet Union on how to safely store nuclear fuel. Because any accidental or intentional criminal act causing a major radioactive event would be bad for the entire world, not just for one country. So “global security” becomes an opportunity for both sides to share knowledge.
We now see that since the AI Action Summit in Paris, countries like the UK have renamed their “AI Safety Institute” (safety in the product sense, like wearing seatbelts) the “AI Security Institute,” viewing AI from a security perspective (national and information security) rather than just a safety perspective. As the scope of AI-caused damage and harm expands, people are viewing AI through this new “national security” lens.
In this process, open source plays a crucial role. In the cybersecurity world, new encryption or security systems aren’t typically developed behind closed doors; they’re published at the draft stage. Wouldn’t that make attacks easier? The information security community reached a consensus over the past thirty years, often stated as “the enemy knows the system” (Shannon’s maxim, a restatement of Kerckhoffs’s principle): you only need to protect your keys, and everything else can be public, which is actually safer. If you instead try to protect your keys, your code, and your deployment blueprints all at once, counting on attackers knowing nothing, you’ll be breached immediately. Open-source cybersecurity builds resilience: not by avoiding attacks, but because your blueprints are public, the friends who help defend you know those blueprints too, so they can help you fix issues and cut off attackers’ paths quickly.
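The “protect only your keys” idea can be made concrete with a minimal Python sketch (illustrative only, not from the interview): HMAC-SHA256 is a fully public, standardized algorithm, yet message authentication remains secure as long as the single key stays secret. The message and key below are made-up placeholders.

```python
import hashlib
import hmac
import secrets

# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256) is
# completely public; security rests solely on keeping one key secret.
key = secrets.token_bytes(32)          # the ONLY secret
message = b"deployment blueprint v2"   # placeholder message, fully public

# Anyone can read this code and the message; without the key,
# they still cannot produce a valid authentication tag.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Verification uses a constant-time comparison to resist timing attacks.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())

# An attacker guessing a different key produces a tag that (with
# overwhelming probability) fails verification.
forged = hmac.new(secrets.token_bytes(32), message, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)
```

Everything except `key` could be printed in a newspaper without weakening the scheme, which is exactly why publishing the blueprint lets defenders audit it without helping attackers.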
This approach requires being as open as possible about your defenses. With openness comes interoperability, and it’s very important for Taiwan, Japan, and like-minded friends to train together. Open source is a prerequisite for all of this; open source actually enhances security. This is a 21st-century concept. In the last century, even encryption systems were classified as military technology that couldn’t be exported from the US. But after thirty years, the entire security community understands that the more open an encryption system is, the earlier its vulnerabilities are discovered, and the more secure it becomes. AI security will move in this direction too.
Of course, competition remains, but the “security” aspect can be shared and won’t create obstacles. Previously, to cause information security damage to another party, you had to be a major power. Now, even a criminal gang of three or four people can cause disproportionate harm, like when someone used planes to crash into tall buildings. In this new situation, you’re not just defending against other major powers but various small criminal groups. These small criminal groups can now operate through fully automated means like ransomware and online scams, without even needing remote control. The money they gain from crimes can be reinvested in computing power, creating continuous iteration. We can imagine this situation becoming more complex in the future, making it even more necessary for countries to share knowledge about “how AI can defend.”
Additionally, in educational settings, some worry that teachers might become unnecessary since AI can handle many tasks like language translation, legal research, etc. How do you think AI should be used in educational settings?
This challenge already emerged when search engines and Wikipedia appeared. Previously, teachers had the most professional information in their heads, and students listened to them. But even without AI, once encyclopedias and search engines became available, students would interrupt teachers saying, “Wikipedia doesn’t say that” or “The internet says something different.” In other words, if it’s purely about knowledge transmission with standard answers, teachers lost their monopoly long ago.
In Taiwan, we changed our basic education teaching guidelines in 2019. Previously, the focus was on “standard answers”—understanding and reproducing them. But from 2019, we shifted because these standard answers are something AI can handle better than both students and teachers.
What we want to cultivate in the learning process isn’t standard answers but “how to spark your curiosity”—this is self-initiation; “how to collaborate with people from different backgrounds”—this is interaction; and “how to see win-win possibilities in collaboration rather than just ‘I win, you lose’”—this is common good.
Self-initiation, interaction, and common good (summarized as “self-moving good” in Mandarin) represent the value humans can still create after AI handles all the standard-answer tasks. This is what we’ve been discussing: the meaning generated between people through exchange. This meaning is built on mutual understanding and care. I think this isn’t unfamiliar to Japan, which also believes a person’s success isn’t just about perfect test scores or earning the most money, but maintaining bonds with society, responding to social needs, and bringing overall value. In this respect, Taiwan and Japan are completely aligned.
So our education doesn’t need to worry about AI. If we ask students to develop “self-initiation, interaction, and common good,” teachers become facilitators who help students interact and spark creativity, rather than repositories of all standard answers—that model is long gone.
After implementing our new education approach in 2019, the first batch of middle school students entered this system by 2022 and participated in the ICCS (International Civic and Citizenship Education Study) assessment. Taiwan’s students ranked first globally in civic literacy and confidence in their ability to contribute to environmental sustainability, social issues, and human rights. Reassuringly, our OECD PISA rankings (in mathematics and science) remain among the top, showing we haven’t sacrificed STEM performance while emphasizing civic literacy. I think this is the best outcome.