So, Anthropic had this idea that the safety margin needed to be something like six times, because the people abusing those language models may actually be more imaginative than you and I, and may have access to resources we don't even know exist. So there needs to be a safety margin. The idea is that if we put, say, six times more investment into safety and care, then even if the capability grows six-fold, the safety margin still holds, and it's safe to release, basically.
