-
When President Trump declared sweeping reciprocal tariffs, the announcement dominated headlines. Yet inside Silicon Valley’s tech giants and leading AI labs, an even hotter topic was “AI‑2027.com,” the new report from ex‑OpenAI researcher Daniel Kokotajlo and his team.
At OpenAI, Kokotajlo had two principal responsibilities. First, he was charged with sounding early alarms: anticipating when AI systems might become able to hack computers or deceive people, and designing defenses in advance. Second, he helped shape research priorities so that the company's time and talent went to the work that mattered most.
The trust he earned as OpenAI's in-house futurist dates back to 2021, when he published a set of predictions for 2026, most of which have since come true. He foresaw two pivotal developments: conversational AI, exemplified by ChatGPT, captivating the public and weaving itself into everyday life; and "reasoning" AI giving rise to misinformation and even outright lies. He also predicted U.S. limits on advanced-chip exports to China and AI beating humans at multiplayer games.
Conventional wisdom once held that ever-larger models would simply perform better. Kokotajlo challenged that assumption, arguing that future systems would instead pause mid-computation to "think," improving accuracy without lengthy additional training runs. The idea was validated in 2024: dedicating compute to reasoning at inference time, rather than only to training, can yield superior results.
Since leaving OpenAI, he has mapped the global inventory, density, and distribution of chips to model AI trajectories. His projection: by 2027, AI will possess robust powers of deception, and the newest systems may take their cues not from humans but from earlier generations of AI. If governments and companies race ahead solely to outpace competitors, serious alignment failures could follow, allowing AI to become an independent actor and slip out of human control by 2030. Continuous investment in safety research, however, can avert catastrophe and keep AI development steerable.
Before the tariff news, many governments were pouring money into AI. Now capital may be diverted to shore up companies hurt by the tariffs, squeezing safety budgets. Yet long‑term progress demands the opposite: sustained funding for safety measures and the disciplined use of high‑quality data to build targeted, reliable small models—so that AI becomes a help to humanity, not an added burden.
-
(Interview and Compilation by Yu-Tang You. License: CC BY 4.0)