Chatbot users should not share private information with software, expert warns
Professor Michael Wooldridge said users should assume any information they type into ChatGPT or similar chatbots is "just going to be fed directly into future versions" (Alamy/PA)
Users of ChatGPT and other chatbots should resist sharing their private information with the technology, an expert has warned.
Michael Wooldridge, professor of computer science at Oxford University, said complaining about personal relationships or expressing political views to the artificial intelligence (AI) was "extremely unwise", the Daily Mail reported.
Prof Wooldridge will deliver the Royal Institution's annual Christmas lectures on the BBC this week, with a focus on AI and help from the world's first ultra-realistic robot artist, Ai-Da.
Speaking about finding personalities in chatbots, Prof Wooldridge said: "It has no empathy. It has no sympathy.
"That's absolutely not what the technology is doing, and crucially it's never experienced anything.
"The technology is basically designed to try to tell you what you want to hear – that's literally all it's doing."
Prof Wooldridge said users should assume any information they type into ChatGPT or similar chatbots is "just going to be fed directly into future versions", and that it was nearly impossible to get data back once it was in the system.
Ai-Da can create drawings, paintings, sculptures and performance art, and Aidan Meller, director of the Ai-Da project, said developments in AI would bring "seismic changes" across industries in the next four years.
He told BBC Radio 4's Today programme: "AI is incredibly powerful – it's going to transform society as we know it, and I think we're really only at the very beginning.
"We have these explosions of development, things like ChatGPT that people know about, but in actual fact as more and more people get to grips with it, we think that by 2026 or 2027 there's going to be a seismic change as AI is in all industries."
Mr Meller said the medium of art allows scientists to discuss and study issues around AI without any risk of threat to humans, because art itself is benign.
Talking about the Royal Institution lectures, he said: "I think AI is going to enable us to have very fake situations, and we're not going to know whether they're fake or not – that is where the problem lies.
"We don't know what we're dealing with, and we hope that these lectures by the Royal Institution are going to be able to really open that topic up.
"Remember we've got the elections next year, very worrying times for things that are fake and not fake, so in actual fact it is a very serious matter."
Mr Meller described 2024 as "a very big year" for AI, with the fifth version of ChatGPT set to be released, which he said will be able to take actions rather than simply generate text.
He explained: "You could say to your phone 'Can you book me the restaurant on Monday at seven?' ChatGPT Five will be able to phone up the restaurant, speak to them audibly, say 'Hi, I'm trying to get an appointment for seven' and book it for you, and then come back to you and say 'We've now done that'. Can you imagine how that's going to be useful in business?"
Mr Meller also hailed progress in the Metaverse – an augmented reality platform created by Facebook parent company Meta – as a huge development in 2024.