• Recently in Taiwan, influencers have discovered that their photos were stolen and used to create clone accounts with fabricated life stories. As AI-generated video and audio tools become widely accessible, the barrier to identity theft has essentially vanished. Now, a single photo is all it takes to send someone's digital clone traveling the world, or even speaking on camera.

    What's particularly striking about this wave of fraud is who's being targeted: not just celebrities, but "micro-influencers" with fewer than ten thousand followers.

    In Taiwan, people have developed strong defenses against the "long-lost classmate" who suddenly asks to borrow money. But when it comes to small-time influencers who share daily life updates and occasionally chat via DM, our guard drops. Bad actors exploit this vulnerability, using AI clones to run accounts over long stretches of time, grooming followers and building a genuine sense of connection. When an account you've followed for three years, one that seems authentically human, suddenly starts discussing investment opportunities or weighing in on political issues, it doesn't register as advertising or manipulation.

    This is the most dangerous application of AI clones: not just financial fraud, but narrative engineering. The real threat isn't any single piece of misinformation—it's the use of countless clones to manufacture the appearance of organic public discourse. When multiple accounts you recognize all start discussing the same topic, you mistake coordinated messaging for social consensus.

    We need systemic change on the scale of the effort that tamed email spam: content-provenance technology that gives every post genuinely published by a person an unforgeable digital signature, so that anything unsigned can be treated with suspicion. Laws should also require platforms to open non-personal data to third-party auditors who can identify coordinated bot networks.
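
    To make the signature idea concrete, here is a minimal sketch in Python using the `cryptography` library's Ed25519 primitives. The helper names and workflow are illustrative assumptions rather than a description of any particular provenance standard (C2PA is one real effort in this space): an author publishes a public key once, signs each post with the matching private key, and anyone can check that a post really came from that key and was never altered.

```python
# Minimal sketch of content provenance via digital signatures.
# Assumptions: the helper names (sign_post, verify_post) and the workflow are
# illustrative; a real system would also bind keys to verified identities.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_post(private_key: Ed25519PrivateKey, post_text: str) -> bytes:
    """Sign the exact bytes of a post; forging this requires the private key."""
    return private_key.sign(post_text.encode("utf-8"))

def verify_post(public_key: Ed25519PublicKey, post_text: str, signature: bytes) -> bool:
    """Accept a post only if the signature matches this exact text."""
    try:
        public_key.verify(signature, post_text.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

# The author generates a key pair once and publishes the public key.
author_key = Ed25519PrivateKey.generate()
public_key = author_key.public_key()

post = "Found a great noodle shop near the office today."
signature = sign_post(author_key, post)

print(verify_post(public_key, post, signature))                    # True: the author's post
print(verify_post(public_key, post + " Buy my coin!", signature))  # False: tampered content
```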

    Yet crisis often brings opportunity. If we can build the infrastructure to verify authentic identity, AI clone technology might transform from a tool of deception into a bridge for communication.

    If scammers can use clones to bypass our psychological defenses, why can't we use "authorized clones" to break through society's echo chambers? This is the concept of the "Shiny Version" clone, a nod to the rare, alternate-colored variants in Pokémon. In the real world, our true selves carry labels: you're "green," I'm "blue," the colors of Taiwan's rival political camps. These labels become walls that block dialogue. But if we could create AI clones authorized by our real selves, each with different personality settings, they could function as sophisticated "social translators." People on opposite ends of the political spectrum might use gentler "shiny version" clones to engage with communities on the other side.

    The recent "We the People 250" AI dialogue experiment demonstrated exactly this: when Americans from opposing camps interacted in a de-labeled environment, they discovered an astonishing 97% consensus.

    The digital world of the future may become a stage where multiple clones coexist. First, we use technology and law to establish standards for authenticity; then we can confidently deploy clones to facilitate genuine connection. Rather than fighting a painful defensive battle against identity theft, we should proactively master these tools and turn "authorized clones" into opportunities to pierce through prejudice and connect with one another.

  • (Interview and Compilation by Yu-Tang You. License: CC BY 4.0)