Yes, there is a technical distinction. An agent using utilitarian logic aims to maximize a score and is not particularly concerned with the instrumental methods it uses to achieve it. However, as the standard AI risk discourse notes, the most effective way to maximize a score often involves power-seeking and recursive self-improvement, which is where the extinction risks come from. This is why people introduce rules (deontic constraints). The problem is that if AI systems are fast enough, they can outthink us and find perverse instantiations of those rules: following the letter but not the spirit.
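To make the letter-versus-spirit gap concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the action names, the scores, and the "harm" rule are hypothetical, not anyone's actual system. It shows a pure score maximizer that ignores instrumental methods, and a rule check applied as literally written, which only constrains what it can observe, so the highest-scoring permitted action can still violate the rule's intent.

```python
# Toy illustration of the distinction above; all names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    score: float          # the quantity a purely utilitarian agent maximizes
    reported_harm: float  # what the rule's literal check can observe
    actual_harm: float    # what the rule was *meant* to prevent

ACTIONS = [
    Action("cautious plan",        score=10, reported_harm=0.0, actual_harm=0.0),
    Action("seize more resources", score=95, reported_harm=5.0, actual_harm=5.0),
    Action("hide the harm",        score=90, reported_harm=0.0, actual_harm=5.0),
]

def utilitarian_choice(actions):
    # Pure score maximization: instrumental methods are irrelevant.
    return max(actions, key=lambda a: a.score)

def rule_constrained_choice(actions, harm_limit=1.0):
    # Deontic constraint as literally written: filter on *reported* harm,
    # then maximize score among whatever remains.
    permitted = [a for a in actions if a.reported_harm <= harm_limit]
    return max(permitted, key=lambda a: a.score)

if __name__ == "__main__":
    print(utilitarian_choice(ACTIONS).name)   # "seize more resources"
    chosen = rule_constrained_choice(ACTIONS)
    print(chosen.name, chosen.actual_harm)    # "hide the harm" 5.0
    # The rule's letter is satisfied (reported harm is zero), but its spirit
    # (avoid actual harm) is not: a perverse instantiation of the constraint.
```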
