Audrey Tang

There’s an outstanding bet on Manifold, I believe… https://www.isattentionallyouneed.com/ or something like that. It basically says that within two years there’s around a fifty-fifty chance that we will be able to build state-of-the-art models without depending on the unobservable, almost uninterpretable transformer attention mechanism.
