Chatbots tell people what they want to hear, even when harmful, posing 'insidious risks', scientists warn
Chatbots continued to validate users' views and intentions even when these were irresponsible, deceptive or involved self-harm, research found.
Turning to AI chatbots for personal advice poses “insidious risks”, according to a study showing the technology consistently affirms a user’s actions and opinions even when they are harmful.
Scientists said the findings raised urgent concerns over the power of chatbots to distort people’s self-perceptions and make them less willing to patch things up after a row.