So, in that sense, I think that AI safety is still the broader bucket, and I think we should probably continue seeing it through that lens. And of course, misalignment is definitely a big component of it as well, considering the potential for catastrophic risks it entails. But yeah, I would say that I personally feel more comfortable having that broader AI safety umbrella as something we move towards, with the various risks contextualized within it.
