Got it. And I'm guessing, in that sense, that in the current scenario you'd say the right approach is to keep the pace of AI development roughly the same while, at the same time, making progress on the safety questions as the technology advances.