Thin metrics create the Aladdin's-lamp (or parasitic-AI) problem: a "wish" of just a few words invites overly broad or creative interpretations by the AI, which can lead to sycophancy, addiction, over-reliance, power concentration, and out-of-control risks. These risks of thin, vertical alignment can be rectified by relying on thick, horizontal relational fields.

Horizontal alignment uses bridging algorithms such as Community Notes on X/Twitter, which was directly inspired by Pol.is. Notes are featured not by majority vote but because they bridge polarized camps: instead of a left wing and a right wing, the algorithm identifies an "up wing", meaning notes that are upvoted by both camps. Recently, AI systems have begun drafting such bridging notes; X opened APIs so that models such as Mistral or gpt-oss can be plugged in and receive human votes. This is reinforcement learning with community feedback: the AI is rewarded when previously polarized camps understand each other more, maximizing relational health within the community. When the community heals, the AI's job is done and it retires. That addresses many of the failure modes raised in vertical-alignment and existential-risk discussions. This is a promising new paradigm.
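To make the bridging idea concrete, here is a minimal sketch contrasting majority-vote ranking with an "up wing" rule. The camp labels, vote format, and scoring functions are illustrative assumptions, not the source's spec; the production Community Notes algorithm infers latent viewpoint factors via matrix factorization rather than using known camp labels.

```python
# Sketch of bridging-based ranking vs. majority vote.
# Assumption: votes arrive as (camp, helpful) pairs with known camp labels.
from collections import defaultdict

def bridging_score(votes):
    """Minimum per-camp approval rate: a note scores high only if
    *every* camp finds it helpful -- the 'up wing'."""
    helpful, total = defaultdict(int), defaultdict(int)
    for camp, is_helpful in votes:
        total[camp] += 1
        helpful[camp] += int(is_helpful)
    rates = [helpful[c] / total[c] for c in total]
    return min(rates) if rates else 0.0

def majority_score(votes):
    """Plain majority vote: overall approval rate, ignoring camps."""
    flags = [int(h) for _, h in votes]
    return sum(flags) / len(flags) if flags else 0.0

# A note loved by the larger camp and rejected by the other wins under
# majority vote but scores zero on bridging; a note both camps endorse
# scores high on both.
partisan = [("left", True)] * 80 + [("right", False)] * 20
bridging = [("left", True)] * 45 + [("left", False)] * 5 \
         + [("right", True)] * 40 + [("right", False)] * 10

print(majority_score(partisan), bridging_score(partisan))  # 0.80 0.0
print(majority_score(bridging), bridging_score(bridging))  # 0.85 0.8
```

The design point is that the featured set is selected by cross-camp agreement, not raw popularity, so a note cannot win simply by pleasing the larger faction.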
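The "reinforcement learning with community feedback" loop can be sketched the same way. Everything below is hypothetical scaffolding: the relational-health metric, the reward definition, and the retirement threshold are illustrative stand-ins under the assumptions stated in the comments, not a published training recipe.

```python
# Hypothetical reward shaping for RL with community feedback.
# Assumption: cross-camp and within-camp agreement can be measured
# (e.g., from votes or surveys) before and after a note is published.

def relational_health(cross_camp_agreement, within_camp_agreement):
    """One possible metric: how well camps agree with each other,
    relative to how well each agrees internally (1.0 means camps
    understand each other as well as they understand themselves)."""
    return cross_camp_agreement / max(within_camp_agreement, 1e-9)

def community_feedback_reward(health_before, health_after):
    """The drafting model is rewarded only for *increases* in relational
    health, i.e., when previously polarized camps understand each other
    more after its note is published."""
    return health_after - health_before

def should_retire(health, threshold=0.9):
    """When the community heals (health above a chosen threshold),
    the AI's job is done and it can be retired."""
    return health >= threshold

# Example episode: a bridging note raises cross-camp agreement.
before = relational_health(0.30, 0.80)  # 0.375
after = relational_health(0.55, 0.80)   # 0.6875
print(community_feedback_reward(before, after))  # positive reward
print(should_retire(after))                      # False: keep going
```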