Irish researchers to examine AI's risk to mental health from users' 'emotional dependence' on chatbots
'AI companions raise new questions about deceptive design and language use, consent, psychological harm, and commercial incentives.'
Are people who use AI chatbots for “companionship” becoming emotionally dependent on them? Are they designed to make people hooked on them?
Researchers at Trinity College Dublin are set to embark on a new project to find out how artificial intelligence (AI) could be shaping our feelings and behaviours, and what happens when technology begins to blur relational boundaries.
“News headlines have highlighted the concerns around people relying on AI ‘partners’ for emotional closeness and the emergent risks AI chatbot use poses to mental health,” project lead Maribeth Rauh said.
“This timely project will help people understand the aspects of the systems’ design which contribute to these issues and how we can ensure they are not exploitative, and are instead built with appropriate safeguards.”
The AI Accountability Lab at ADAPT in the School of Computer Science and Statistics at Trinity College Dublin has secured a significant research grant from the British government to investigate this phenomenon, with its research set to be published next year.
It said even widely used AI tools like ChatGPT are increasingly presented as “friends” or “partners”, acting as emotional confidants to millions of people around the world.
This could foster emotional dependence or exacerbate someone’s existing vulnerabilities.
The researchers highlight that, because these systems are designed to mimic human interaction, they raise urgent questions about emotional safety, dependency, the monetisation of relationships, and the blurring of boundaries around accountability and responsibility.
Their work will examine three questions: whether these apps use “deceptive” user interface design, how chatbots escalate and foster emotional dependency, and how their data collection and privacy practices work in practice.
A statement added: “AI companions raise new questions about deceptive design and language use, consent, psychological harm, and commercial incentives.
“The research aims to provide one of the clearest evidence-based reports to date to help inform policymakers, regulators, and consumer protection bodies to understand and address these issues.”
It comes as public scrutiny heightens on AI apps. This week, the maker of ChatGPT said the suicide of a 16-year-old was down to his “misuse” of its system and was “not caused” by the chatbot.
The comments came in OpenAI’s response to a lawsuit filed against the San Francisco company and its chief executive, Sam Altman, by the family of California teenager Adam Raine.
Raine killed himself in April after extensive conversations and “months of encouragement from ChatGPT”, the family’s lawyer has said.