
After a February filled with back-to-back holidays, many of us have officially returned to the office. What we might not realize is that while we were celebrating Lunar New Year, a new generation of AI models quietly broke free from their solo-tool roles. They can now "socialize" with one another, dynamically assigning tasks and assembling cooperative teams on the fly.

What this means is that today, in 2026, using AI is no longer a matter of issuing one command per action. Once you learn to redefine your relationship with these "direct reports" and understand how to interact with them, you unlock a staggering level of productivity.

AI is no longer a lone wolf tool. It is a team player that knows how to collaborate. In fact, leveraging AI has become an art of orchestration. Trending technologies like OpenClaw and Claude Code Agent Teams give AI powerful lateral coordination, elevating everyday users into "commanders" of AI squads.

For example, when I need to brainstorm topics for my column, all that is required is to hand off the assignment. The backend automatically assembles five distinct agent teams, each with different expertise: one digs into the archive of my past columns for thematic threads; another tracks the latest international trends; a third combs through scientific journals hunting for technical blind spots; a fourth explores power dynamics in the age of digital democracy; and the fifth handles headlines and polish.

These five AI teams meet, debate and research just like humans. In barely five minutes, they deliver a collaboratively produced report — complete with a detailed trail of their deliberations.
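The workflow described above can be sketched as a simple orchestration loop: fan one assignment out to several specialists, then merge their findings along with a trail of who did what. The sketch below is a minimal illustration in plain Python, not the API of any product named in this column; the agent names, the stubbed `run` method and the report format are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One specialist on the brainstorming squad (roles are hypothetical)."""
    name: str
    focus: str

    def run(self, assignment: str) -> str:
        # A real agent would call a language model here; this stub
        # just records what the agent was asked to investigate.
        return f"{self.name} examined '{assignment}' with a focus on {self.focus}."

def orchestrate(assignment: str, team: list[Agent]) -> str:
    """Fan the assignment out to every agent, then merge the findings
    into one report that doubles as a trail of the deliberations."""
    trail = [agent.run(assignment) for agent in team]
    return "\n".join([f"REPORT: {assignment}", *trail])

# The five specialties described in the column, modeled as agents.
team = [
    Agent("Archivist", "thematic threads in past columns"),
    Agent("Scout", "the latest international trends"),
    Agent("Librarian", "technical blind spots in scientific journals"),
    Agent("Theorist", "power dynamics in digital democracy"),
    Agent("Editor", "headlines and polish"),
]

print(orchestrate("brainstorm column topics", team))
```

In a real system the agents would also debate one another's drafts; here they run independently, which is enough to show the fan-out-and-merge shape of the process.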

The U.S. National Institute of Standards and Technology (NIST) has already begun drafting guidelines for AI-to-AI interaction. These standards are not just about enabling interoperability among models built by different companies, such as GPT, Gemini and Claude; they are also designed to guard against the risk of groupthink bias when multiple AI systems interact.

This shift is about to upend the rules of competition in the workplace. In the past, your capacity was limited by how many tasks you could personally handle. Going forward, it will be defined by how many agents you can command.

When you lead an AI team, you need to learn to be a gardener. Start with KPIs — the old mindset needs an overhaul. We used to chase 120 percent stretch goals. Now, "80 percent full" is the sweet spot.

Here is why: in a multi-agent game, pushing any single metric to a perfect score tempts individual AIs to hit their target at the expense of the whole. Your job, then, is that of a gardener tending a garden — maintaining ecological balance, maximizing collective benefit and ensuring the AI team's direction stays aligned with human well-being.
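A toy model makes the point concrete. Assume, purely for illustration, a fixed resource budget shared by five agents, and a collective benefit that is bottlenecked by the weakest member; both assumptions are mine, not the column's.

```python
def collective_benefit(scores: list[int]) -> int:
    # Toy model: collective output is bottlenecked by the weakest
    # teammate, so the team score is the minimum individual score.
    return min(scores)

budget = 400  # hypothetical shared resource units for a five-agent team
balanced = [80, 80, 80, 80, 80]   # everyone at "80 percent full"
greedy = [100, 75, 75, 75, 75]    # one agent maxes its own metric

# Both allocations spend exactly the same budget...
assert sum(balanced) == budget == sum(greedy)

# ...but the balanced team does better as a whole.
print(collective_benefit(balanced))  # 80
print(collective_benefit(greedy))    # 75
```

Under this toy model, the agent that pushes its own score from 80 to 100 drains resources from its teammates and drags the team score down, which is exactly the gardener's argument for settling at "80 percent full."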

This trend also helps us reclaim what makes us fundamentally human. Once AI handles all the busywork, what remains is the irreducible core of human value: curiosity, collaboration and civic care.

AI can never supply curiosity, nor can it build genuine human bonds. That is on us.

Given this trajectory, your AI goal for the year should not be chasing maximum performance metrics. It should be learning how to be a good enough gardener — guiding the AI team to follow the rules, fit the context and serve the mission, all while you focus on judgment and human connection. This kind of Civic AI is the mindset most worth carrying back to work.
(Interview and Compilation by Yu-Tang You. License: CC BY 4.0)