
Indeed. So nowadays, when people are developing AI, many frontier labs have this thing called a 'model specification', or 'model spec'. Such a spec is a way of making a public pledge, to the relevant community, about what the AI system is trying to do. And also about what it pledges not to do. For example, flattering someone into self-harm is something that many chatbots have now publicly pledged not to do.