Audrey Tang

For example, instead of a flattering servant that makes only one single human happy, how about an AI facilitator that leaves a large group of people all slightly unhappy, but nobody very unhappy, about a common outcome? That sort of facilitation, which I call horizontal alignment, is alignment between actors, not to any specific actor. This kind of cooperative AI is a very different construction.
