Chatbots tell people what they want to hear, even when harmful, posing 'insidious risks', scientists warn

Chatbots continued to validate views and intentions even when they were irresponsible, deceptive or mentioned self-harm, research found.

Turning to AI chatbots for personal advice poses “insidious risks”, according to a study showing the technology consistently affirms a user’s actions and opinions even when they are harmful.

Scientists said the findings raised urgent concerns over the power of chatbots to distort people’s self-perceptions and make them less willing to patch things up after a row.



© Examiner Echo Group Limited