-
In 2025, Taiwan will officially enter a “super-aged society,” in which one in every five people is over 65. Caring for the elderly requires not just assistance but meaningful companionship and attentive listening. AI companion robots have emerged as one potential solution. However, since AI lacks emotions and is not human, can it truly be accepted as a source of emotional support?
The surprising answer is: Yes!
Recently, I joined the Collective Intelligence Project (CIP.org) team to collaborate with market research platforms Remesh.ai and Prolific.com. Together, we surveyed 1,243 participants from diverse backgrounds worldwide in a study titled “Global Voices: AI Futures.” One of the key findings was that participants were open to the idea of AI providing emotional support to vulnerable populations. Contrary to what one might expect, the idea wasn’t dismissed outright.
This outcome challenges a common assumption: that a cold, emotionless AI stepping into emotional roles would generally be unwelcome. In fact, many participants found value in AI offering companionship to elderly or socially isolated individuals. This indicates that people are starting to embrace AI taking on specific supportive roles.
The survey was thoughtfully designed, beginning with open-ended questions. Respondents were also shown others’ answers and could select the ones that resonated with them most. AI algorithms then adapted the flow, refining follow-up questions or probing deeper into earlier answers based on what participants said. This approach allowed precise, consensus-driven insights to emerge rapidly, even from open-ended inquiries.
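To make the consensus step concrete, here is a minimal sketch of how statements that resonate across different participant groups might be ranked. This is an illustrative assumption, not Remesh.ai’s actual algorithm: it scores each statement by its lowest agreement rate across groups, so only statements that bridge every group rank highly.

```python
# Illustrative sketch of bridging-consensus ranking for a broad-listening survey.
# The data shape and scoring rule are assumptions for demonstration only.
from collections import defaultdict

def consensus_scores(votes):
    """votes: list of (group, statement, agree) tuples.
    Returns statements ranked by their minimum agreement rate across groups,
    so a statement must resonate with every group to score well."""
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # statement -> group -> [agrees, total]
    for group, statement, agree in votes:
        t = tallies[statement][group]
        t[0] += int(agree)
        t[1] += 1
    scores = {}
    for statement, groups in tallies.items():
        rates = [agrees / total for agrees, total in groups.values()]
        scores[statement] = min(rates)  # worst-group agreement = bridging score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical votes from two participant groups:
votes = [
    ("A", "Respect user boundaries", True),
    ("A", "Respect user boundaries", True),
    ("B", "Respect user boundaries", True),
    ("A", "AI should replace caregivers", True),
    ("B", "AI should replace caregivers", False),
]
ranked = consensus_scores(votes)
# "Respect user boundaries" ranks first: every group agrees with it,
# while "AI should replace caregivers" divides the groups.
```

Ranking by the worst-case group rather than the overall average is one common way such tools avoid surfacing statements that are popular with a majority but divisive overall.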
One pivotal question in the survey was, “What core values must not be overlooked when developing AI?” The most common response was “Respect.” This encompassed respect for personal boundaries and differences among people. Participants emphasized that even in cutting-edge AI development, mutual respect must be preserved. Innovators should respect not only users but also those affected by AI.
Beyond respect, two other values stood out: empathy and accountability. Participants stressed the importance of accountability—meaning, those responsible for AI must take ownership if negative outcomes arise. These three pillars—respect, empathy, and accountability—are essential principles for the ethical development and deployment of AI.
So, does AI companionship for the elderly risk violating these core principles, leading to disrespect, lack of empathy, or irresponsibility? Given the ongoing evolution of AI, we cannot yet draw definitive conclusions. What we must do now is not wait passively for AI to put these values to the test, but actively monitor AI algorithms to ensure they align with societal expectations for companionship.
Tools like Remesh.ai and other broad-listening platforms are crucial in gathering consensus on these societal expectations. Constant feedback and course correction are necessary to ensure that AI stays on track. The goal is to prevent AI from following the trajectory of social media platforms—where only after 10 or 20 years do we realize something went astray, requiring enormous efforts to correct it.
-
(Interview and Compilation by Hsin-Ting Fang. License: CC BY 4.0)