That assumes the level of investment into alignment and care stays constant, right? And I think if you ask Yann LeCun or Zuck, their thought might be that just as GPT-2 and GPT-3 equipped a generation of alignment and explainability researchers, preparing them for GPT-4, so would a kind of staggered release prepare the safety and containment researchers.
