• Earlier this month, Meta CEO Mark Zuckerberg announced that the company would discontinue its third-party fact-checking program in the United States and, taking a cue from social media platform X, pivot to a “community notes” model to address misinformation.

    Two main factors underpin this decision. First, for years, many U.S. users have complained that the fact-checking mechanism lacked transparency. In particular, political posts frequently had their reach limited, fueling long-standing dissatisfaction among many Trump supporters.

    Even more significant is the stance taken by Brendan Carr, the FCC chair appointed by Trump. Late last year, Carr publicly called out several tech giants, including Meta and Google, essentially stating: “Your platforms enjoy immunity when moderating content because of Section 230 of the Communications Decency Act. This immunity assumes the platform operates in good faith. But if you outsource moderation to specific organizations and keep the process opaque, can that still be called a good-faith operation?”

    Following Trump’s election, Meta moved swiftly to adopt a more transparent approach to managing controversial information. The model it hopes to emulate is that of X, which used its community notes system to navigate the highly polarized topics surrounding the U.S. election.

    Under X’s approach, a large pool of volunteers representing a range of distinct perspectives annotates contentious posts. Volunteers upvote or downvote each other’s annotations based on their own viewpoints, and whichever annotation gains the broadest approval from opposing sides is pinned beneath the original post, where it remains even if the post’s author disagrees with it.
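
    To make that selection rule concrete, here is a minimal Python sketch. It assumes, for simplicity, that each rater’s viewpoint group is known in advance; in reality X infers viewpoints from rating history rather than from labels, and the function names and the min-across-groups rule below are illustrative, not X’s actual code.

```python
from collections import defaultdict

def approval_by_group(ratings):
    """ratings: list of (group, is_helpful) pairs for one annotation.
    Returns each group's approval rate."""
    totals = defaultdict(lambda: [0, 0])  # group -> [helpful_count, total]
    for group, is_helpful in ratings:
        totals[group][0] += int(is_helpful)
        totals[group][1] += 1
    return {g: helpful / total for g, (helpful, total) in totals.items()}

def pick_note_to_pin(candidates):
    """candidates: note_id -> ratings list. Pins the annotation whose
    *lowest* per-group approval is highest, i.e. the one that opposing
    sides are jointly most willing to endorse."""
    def cross_group_score(ratings):
        rates = approval_by_group(ratings)
        # An annotation endorsed by only one side scores zero.
        return min(rates.values()) if len(rates) > 1 else 0.0
    return max(candidates, key=lambda nid: cross_group_score(candidates[nid]))

# Note A is loved by one camp but rejected by the other; note B earns
# solid approval from both, so B gets pinned.
notes = {
    "A": [("left", True)] * 9 + [("right", False)] * 8 + [("right", True)],
    "B": [("left", True)] * 7 + [("left", False)] * 2
         + [("right", True)] * 6 + [("right", False)] * 2,
}
print(pick_note_to_pin(notes))  # -> B
```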

    For a long time, the algorithms behind social media platforms have exacerbated divisions by reinforcing echo chambers. The advantage of this new mechanism is its use of “bridge-based ranking,” which surfaces annotations that users with differing views, on issues such as immigration or gender, can jointly endorse, thereby building understanding across different communities.
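
    To see how this differs from simple vote counting, consider the matrix-factorization idea X has documented for Community Notes: each rating is modeled as a baseline plus a viewpoint-alignment term, and a note is ranked by the approval left over once alignment is factored out. The sketch below is a simplified, hypothetical rendering of that idea (one-dimensional viewpoint factors, a toy SGD loop, invented parameter names), not X’s implementation.

```python
import numpy as np

def bridge_scores(ratings, n_users, n_notes, epochs=1000, lr=0.02, reg=0.1):
    """ratings: list of (user_id, note_id, value) with value 1.0 for
    "helpful" and 0.0 for "not helpful".

    Fits  r_un ~ mu + b_u + b_n + f_u * f_n  (one-dimensional viewpoint
    factors for brevity) by stochastic gradient descent and returns b_n:
    the part of each note's approval NOT explained by how well the
    rater's and the note's viewpoints align."""
    rng = np.random.default_rng(0)
    mu = 0.0                              # global average helpfulness
    b_u = np.zeros(n_users)               # rater leniency
    b_n = np.zeros(n_notes)               # note's cross-viewpoint appeal
    f_u = rng.normal(0.0, 0.1, n_users)   # rater viewpoint factor
    f_n = rng.normal(0.0, 0.1, n_notes)   # note viewpoint factor
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
            mu += lr * err
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            # Update both factors from their pre-update values.
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))
    return b_n  # rank (or threshold) notes by this intercept
```

    The design choice that matters here is scoring each note by its intercept b_n rather than by its raw approval rate: votes that merely line up with a rater’s viewpoint are absorbed by the f_u * f_n alignment term, so only approval that cuts across viewpoints lifts a note’s score.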

    Some may worry that, even if platforms successfully prevent the creation of fake accounts, supporters of a particular viewpoint could still mobilize to manipulate the system. However, X’s experience shows that such efforts have limited impact. After all, to “game” the system, one would need to craft annotations that both sides would endorse. If someone manages to do that, they are effectively promoting intergroup understanding—an altruistic act, rather than disruptive behavior.
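
    A few toy numbers make the point, reusing the min-across-groups rule from the earlier sketch (again an assumed stand-in for X’s real scoring): flooding an annotation with same-side upvotes cannot raise its score, because the score is capped by the other side’s approval.

```python
def cross_group_score(left_up, left_total, right_up, right_total):
    # The pinning score is capped by the less-approving side.
    return min(left_up / left_total, right_up / right_total)

print(cross_group_score(7, 9, 6, 8))      # honest ratings        -> 0.75
print(cross_group_score(107, 109, 6, 8))  # +100 brigaded upvotes -> still 0.75
```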

    In my view, operating a social media platform should not be a matter of counting heads. If the side with the larger “volume” of voices simply wins, then those willing to pay can easily buy all the exposure they want. But earning genuine agreement from the other camp is not something money can buy.

    Looking ahead, social platforms should do more than chase engagement and retention. They ought to expand the application of bridge-based ranking and leverage AI to synthesize insights from all sides. That way, even if different camps cannot fully align, they can at least understand where one another’s perspectives come from. Only through such efforts can a polarized society build bridges of communication and move toward a more resilient future.

  • (Interview and Compilation by Yu-Tang You. License: CC BY 4.0)