So the assumption is that the system today already rewards outcomes that are bad for humans, and AI systems may simply follow those local incentives more effectively. Humans, wanting short-term profit, delegate important judgments to the AI, and when millions of people each do this to a small degree, it all adds up to outcomes that are bad for humans?