And the reason why is that LLMs, or generative AI in general, do the most damage when they destroy the mutual trust that exists in society at the actor level. When you pick up the phone and hear your friend's or family member's voice, but it's actually a voice-clone bot that wants you to buy some cryptocurrency, that is destroying trust on the actor layer, not the content layer. These are shape-shifting attacks that can carry out intimate, even addictive conversations with many people at once, in whatever voice they choose. We already see these kinds of robocall scams around the world. If you listen to each individual conversation, those scam bots are no better than the artisanal, human-made ones, but they scale across cultures.