
So it's almost like the moment when the Montreal Protocol on ozone was being signed. For the first time, people clearly understood that Freon, a specific chemical, was the cause of ozone depletion. And then people collectively said: let's commit, even though we don't have off-the-shelf replacements for Freon, that a few years from now everybody will use the new thing that doesn't destroy the ozone layer. I think we're at a very similar moment. People are seeing addictive AI as that kind of ozone-depleting thing, something that strip-mines the social fabric and kills, or at least harms, relational health. And people are also seeing assistive AI as a powerful tool for empowerment: the social translators, consensus finders, personal kami, relational kami, the local stewards, and so on. So the sooner we can set a standard under which anyone who refuses to use this kind of assistive AI and insists on building addictive AI is seen as a destroyer of the ozone, the better we can coordinate around this. And the way we coordinate is through interoperability, transparency, and a safeguard model that lets communities bring their own policies to keep AI honest, essentially.