And although we hear that 20% of compute at OpenAI is going into superalignment, it’s not very visible from the outside… So seriously increasing the investment in democratizing evals, I think, is a touch point where both you and we can make visible progress together, because it will not depend on more research breakthroughs on, say, cross-cultural understanding or whatever other open domains. You would just very simply say that this product emits this much ppm of carbon, basically. You know, this product commits this percent of epistemic injustice, which is fine. I mean, we can own up to it. So we can say, you know, this is the best we have. But it means that when people find new ways to increase epistemic justice, it gets rewarded in a tangible fashion rather than just one arXiv preprint. That’s my main point.
