This summer, I was invited to join Oxford’s Institute for Ethics in AI as a Senior Accelerator Fellow. What concerns me most is this: as AI’s speed vastly outstrips our own, how should future social structures adapt?

The U.K. government is actively introducing AI into policy, education, social welfare, and other public‑service workflows to streamline staffing and cut waste, a move projected to save roughly £45 billion (about NT$1.85 trillion) a year. That push has sparked debate. When the public sector uses AI to review cases and grants, people expect it to set aside private interests and deliver absolute impartiality. Yet when it confronts human suffering, can it retain warmth and make decisions that balance compassion, reason, and the law?

When policy chases a single metric, it easily hardens into institutional callousness. If a company’s AI service is poor, consumers can switch providers; but when government power misfires, citizens have far fewer ways to hold it to account.

The Netherlands has already lived through a tragedy of AI governance. Its tax and customs administration used algorithms to screen childcare‑benefit claims for fraud, treating foreign‑sounding names and dual nationality as risk indicators. The result: tens of thousands of low‑ and middle‑income households were scrutinized, falsely accused of fraud, and ordered to repay benefits they were legally entitled to, and more than a thousand children were wrongfully removed from their families.

AI learns extraordinarily well. Given a quantitative target, it often finds quick shortcuts—methods humans wouldn’t think of—to hit that target, without considering the ethical fallout along the way. So the yardstick for AI shouldn’t be “efficiency” alone. It should be trust-under-loss: whether people, even when receiving an unfavorable outcome, can still accept the process as fair.

Social‑platform algorithms are a case in point. Publishers who want to broaden readership may work on better headlines and timely angles. But machine‑learning systems boost controversial posts, amplify ad hominem attacks, and surface baiting content—because the machine, absent ethics, chases traffic. This single‑metric optimization around “engagement rate” ultimately corrodes the quality of public discourse and mutual trust.

Take the Netherlands again. Before cutting a family’s benefits, the AI could be required to consult the social workers who support that household and to verify the facts. When a system’s goal isn’t just saving money but also sustaining public trust, AI must be tasked with devising ways to earn public consent.

Taiwan’s Judicial Yuan now operates an AI‑assisted Sentencing Information System. In clear‑cut cases, the machine provides a sentencing range; judges then decide whether to mitigate or aggravate. The virtue of this system is transparency. Parties, lawyers—indeed anyone—can try out the calculations. If bias is detected, it’s easy to report, so people needn’t surrender liberty or property to a black‑box standard.

Even so, not everything in human society can be delegated to machines. The sentencing tool also supports Taiwan’s newly launched citizen‑judge system, enabling consensus‑based decisions and lending heavier sentences greater legitimacy. On matters touching life and dignity, human caution and empathy remain irreplaceable.

As AI accelerates, the question isn’t whether to use it but what to use it for. The answer is clear: we don’t need a superintelligence that prizes efficiency above all and replaces humans. We need a collaborative partner that helps everyone, together, amplify our collective intelligence.
