And then the second pillar is that if the local community, any community, feels that the current path of AI somehow causes harm, causes what we call epistemic injustice, like writing off the knowledge and wisdom that they have. Maybe they're indigenous nations, maybe they have ways that are different from the current mainstream AI's training assumptions, maybe they have different social norms. Then there needs to be a way, what we call Alignment Assemblies, for them, like what we did around Uber, to come up with a rough consensus, the general direction in which they expect AI to behave.
