Recently, I attended the convening of the International Network of AI Safety Institutes in San Francisco to share how Taiwan addresses the harms caused by AI-generated synthetic content.

“Safety” has emerged as one of the global AI community’s paramount concerns. Over the past year, governments, industry, and academia worldwide have held summits in the UK and Korea to address AI safety. These discussions prompted leading AI companies like Google, Meta, OpenAI, and Microsoft to voluntarily establish risk thresholds, pledging to halt model development or deployment in extreme cases where risks become unmanageable.

While these tech giants have made self-regulatory commitments before, this time they’re investing unprecedented resources and showing willingness to cooperate with competitors. This shift comes as Nvidia’s open-source Nemotron model, trained on its own chips, has surpassed OpenAI’s GPT-4 in performance. Recent surveys indicate that over 40% of Fortune 500 CEOs now prefer open-source models.

Compared to closed models, open-source systems not only process tasks faster and enable more advanced applications but also offer significantly lower costs for updates and fine-tuning. Sakana AI, a company backed by Nvidia, argues that combining smaller open-source models addresses enterprise needs better than training a single large model. If this hypothesis proves correct, it could significantly constrain big tech’s growth potential.

This perceived threat has pushed tech giants to engage more with the open-source community, creating opportunities for collaboration on AI safety initiatives. The major companies, having invested heavily in their new models, want to prevent criminals from easily circumventing safeguards to produce illegal content. This has led to partnerships with open-source communities to maintain open-source trust and safety tools and to rapidly update protection measures.

Similarly, the open-source community values collective action—rapidly updating databases to address the spread of illicit content, rather than leaving such decisions monopolized by a few large corporations with closed systems. Beyond crime prevention, both groups have begun collaborating on cultural and linguistic alignment.

ChatGPT users often notice that its word choices and sentence structures don’t match local usage patterns, yet lack simple, systematic ways to provide feedback. A methodology for collecting responses and allowing user annotations could help ChatGPT automatically align with cultural contexts. This represents a core value for open-source communities like C4AI and EleutherAI, offering lessons for closed systems.
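As a sketch of what such a feedback channel might look like (all names and fields here are hypothetical, not an existing OpenAI or community API), the snippet below records a model response alongside a user’s locally preferred rewrite, tagged with a locale, so that the accumulated pairs could later feed a fine-tuning or preference-alignment step:

```python
# Minimal sketch of a cultural-feedback annotation log (hypothetical schema).
import json
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class CulturalAnnotation:
    prompt: str           # what the user asked
    model_response: str   # the model's original wording
    user_rewrite: str     # the user's locally preferred phrasing
    locale: str           # e.g. "zh-TW", tagging the cultural/linguistic context


def record_annotation(store: Path, annotation: CulturalAnnotation) -> None:
    """Append one annotation as a JSON line, building a preference dataset over time."""
    with store.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(annotation), ensure_ascii=False) + "\n")


if __name__ == "__main__":
    record_annotation(
        Path("cultural_feedback.jsonl"),
        CulturalAnnotation(
            prompt="Write a short festival greeting.",
            model_response="Happy Lunar New Year holidays!",
            user_rewrite="新年快樂，恭喜發財！",
            locale="zh-TW",
        ),
    )
```

Collected at scale, such locale-tagged preference pairs are exactly the kind of data that open communities like C4AI and EleutherAI curate openly, and that closed systems could likewise use to align word choices with local usage.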

The future dominance of open-source versus closed large language models remains uncertain. However, this situation has created an opportunity for major companies to address diverse cultural needs worldwide, potentially ensuring that when we use generative AI, minority cultures and local communities receive equal treatment.
