So, for example, in summarization, what’s important is that the model does not hallucinate. If it needs to generate a headline from a news article, it shouldn’t say things that are not actually in that article. And it turns out that when the model hallucinates, its thought pattern looks different from when it’s copying a part of the news article. So if the model runs locally on your laptop or your desktop, then, unlike a supercomputing data center where you don’t have access to the inner workings of the model weights, on your own computer you of course do have access. You can start instrumenting it, you can start looking at its hallucination patterns. And then a lot of people would be interested in figuring out what it means for a model to hallucinate, what it means for a model to make things up. They can contribute to the horizontal race, which is about putting the tools to inspect models into more and more people’s hands, to see whether they’re hallucinating, whether they’ve developed some harmful intent, whether they’re scheming, and so on.
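To make the idea of “instrumenting” a locally running model more concrete, here is a minimal sketch, assuming the Hugging Face transformers library and using "gpt2" as a stand-in for whatever model actually runs on your machine. It only shows how to get at the internal activations; the hallucination detector itself is left as a placeholder, since the claim that hallucinated and copied tokens look different internally is the speaker’s point, not something this snippet verifies.

```python
# Minimal sketch: capture per-layer activations from a locally run model so
# they can be inspected for hallucination-related patterns. Assumes the
# Hugging Face `transformers` library; "gpt2" is a hypothetical stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small local causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

article = "Acme Corp reported record quarterly revenue on Tuesday ..."
prompt = f"Article: {article}\nHeadline:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # output_hidden_states=True exposes the activations at every layer --
    # the "inner workings" you can only reach when the model runs locally.
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple: (embeddings, layer 1, ..., layer N),
# each of shape (batch, sequence_length, hidden_size).
hidden_states = outputs.hidden_states
print(f"{len(hidden_states)} activation tensors, shape {hidden_states[-1].shape}")

# A hallucination detector would go here, e.g. a probe trained to separate
# activations of tokens copied from the article from activations of tokens
# that introduce unsupported facts. The probe is not shown -- this sketch
# only demonstrates getting the raw activations to feed such a probe.
last_layer = hidden_states[-1][0]          # (seq_len, hidden_size)
per_token_norm = last_layer.norm(dim=-1)   # one trivial per-token statistic
print(per_token_norm)
```

The point of the sketch is just that nothing here needs privileged access: anyone running a model on their own hardware can pull out these activations and start experimenting with ways to tell copied content apart from made-up content.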