Audrey Tang

Not by itself. If a system can change its own beliefs without a legible account of why, opacity becomes more dangerous, not less. The risk is no longer only error, but unaccountable drift. And if a modification is in principle impossible to explain, because there is no reasoning behind it, then it is quite possible that the AI, upon gaining metacognition, will simply cling to its current self.
