Because, at some point, these very large AI models may evolve a very different motivation than the one that we trained into them, and, currently, the science isn't good at all at removing these incentives. The danger then is that, at some point, humanity still thinks we have control, but these superintelligent agents have actually already assumed control. For a very detailed exposition and a scenario, just consult ai-2027.com.
