Barry O'Sullivan: ChatGPT is not your friend

Taking advice from something that can't differentiate truth from fiction and tells you what you want to hear isn't the wisest choice, writes Barry O’Sullivan

Much like Dan, who sits at the bar in Killinaskully dispensing wisdom, ChatGPT merely tells you what you want to hear. It, too, has no problem embellishing the details. 'Jimmy' (Jack Walsh), 'Willie Power' (Pat Shortt), and 'Timmy' (Joe Rooney) in Killinaskully.

Artificial intelligence is a broad umbrella term for computer systems that perform tasks that we think of as requiring human intelligence – and, despite all the fuss about it lately, we’ve been using it for decades. 

There are AI systems that learn, that can process natural language, that can play complex games and be strategic, that can interpret visual scenes and images, and so on. We use AI systems every day. 

Some examples include satellite navigation systems in cars and mobile phones, voice assistants such as Amazon’s Alexa, the features of streaming platforms such as Netflix or Spotify that suggest what you might enjoy next, the AI-driven selection of news items in our social media feeds, and even the Google search engine itself. AI is ubiquitous in our lives.

The term itself was coined in 1955 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, in a proposal for a summer project at Dartmouth. An interesting connection to Ireland: McCarthy’s father, John Patrick, was from Cromane in County Kerry and emigrated to the USA where his genius son was born.

Since its widespread release in November 2022, OpenAI’s ChatGPT has probably become the most widely talked about artificial intelligence technology in the world. It is an example of a Large Language Model (LLM), which can generate plausible text in response to a question or search query, often referred to as a prompt, on every conceivable topic.

ChatGPT, and other similar chatbots such as Google’s Gemini and Anthropic’s Claude, can write very sophisticated answers in terms of both content and style. For example, one could ask for an essay on the 1916 Rising written in the rhyming style of Dr Seuss, and ChatGPT will generate a response instantly and rather brilliantly.

ChatGPT is a cutting-edge chatbot, built using state-of-the-art machine-learning AI methods. It has been trained on a huge proportion of everything one can read in electronic form: the world wide web, digitally available books, research papers, policy documents, newspapers, and many other electronic materials. In a sense, it has read everything humans have ever written that can be accessed online or in digital form.

The technical achievement in producing LLM-based chatbots, like ChatGPT, is astonishing. They seemingly have read everything that the entirety of humankind has produced and they can instantly respond with sophisticated and plausible answers to any question we would like to pose. 

Does that mean that these systems have superhuman understanding and expertise? Are these systems truly artificially intelligent? In a word: No.

The problems of chatbots

While ChatGPT and other LLMs can instantly generate sophisticated answers, they suffer from a number of problems. They don’t really understand what they have read. Instead, informally speaking, they have learned that particular sequences of words tend to occur with other sequences of words. 
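
To make that concrete, here is a toy next-word predictor in Python. It is a deliberate simplification, and emphatically not how ChatGPT is built (modern LLMs use vast neural networks trained on billions of documents), but the underlying principle is the same: predict the next word from statistical association, not understanding.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word tends to follow each word in a
# tiny corpus, then "predict" by picking the most frequent follower.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # prints 'on': an association, not comprehension
```

The model "knows" that 'on' follows 'sat' only because it has counted the pairs; it has no idea what sitting on anything means. Scale that idea up enormously and you have the flavour of an LLM.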

In the AI world we say that these systems lack a commonsense understanding of the world. Whether the text is true or false means nothing to them. They often generate text that is misleading or downright incorrect. They “hallucinate”, meaning they invent things that are not true. 

There is no harm intended, but because they don’t understand the world or what they are saying, they are essentially “stochastic parrots”, as a well-known research paper has described LLMs.

The analogy I like to use comes from Killinaskully, specifically Pat Shortt’s character Dan Clancy, who sits among his friends in Jacksie’s Bar. Ask Dan anything you want. He’ll answer you earnestly and to the best of his ability. He might make up a few details along the way, unintentionally of course.

Maybe you’d prefer him to close his eyes and recite his answer poetically with his hand on his heart and in the style of Padraig Pearse. No problem. While this analogy is somewhat facetious, it demonstrates the challenges that arise when one uses ChatGPT and other LLMs in specific settings.

Barry O'Sullivan: 'If a user believes in a conspiracy theory, for example, the user could use a chatbot like ChatGPT to engage in a dialogue that has the consequence of confirming the user’s beliefs.' Picture: LinkedIn

The purpose of an LLM is to generate plausible text, ideally text that the user will engage with so that it can be refined further. One can get into a conversation with an LLM by tweaking the original prompt.

For example, if you ask ChatGPT how to prepare a roast chicken, it will respond with detailed instructions; you might then feel that you’d prefer the skin to be a little crispier, and say so. ChatGPT will offer up a revised response, hopefully a more acceptable one. It is easy to see how one can direct a conversation to get a desired outcome.
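
For the technically curious, here is a minimal sketch of such a back-and-forth, assuming the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable; the model name is a placeholder, and any chat-capable model would do.

```python
# Minimal sketch of multi-turn prompting with the OpenAI Python client.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "How do I prepare a roast chicken?"}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first.choices[0].message.content)

# The follow-up is appended to the full history, so the model revises its
# earlier answer in context rather than starting from scratch.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "I'd like the skin a little crispier."})

revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revised.choices[0].message.content)
```

Each turn nudges the model towards output the user will accept, which is exactly why a determined user can steer a conversation towards a desired conclusion.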

While this is helpful for making sure we have a good chance of cooking our dinner the way we would like it, the stakes are different when we are asking questions about things that trouble us, seeking advice, or looking for reassurance about our perspective on things. Using an AI tool that has no understanding of what it is saying can have significant consequences.

AI chatbots and LLM-based systems such as ChatGPT are increasingly being used to find information about personal matters, to offer life advice, or even as personal therapists. The ELIZA effect refers to the tendency to project human characteristics onto chatbots.

ELIZA was a chatbot developed at MIT in 1966 that simulated a Rogerian psychotherapist by rephrasing statements made by the user as questions, which had the effect of prompting the user to offer up increasingly emotional and personal details.
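
ELIZA’s trick was remarkably simple. Here is a minimal sketch in Python that captures only the flavour of the original; the 1966 program used a much richer script of ranked patterns and keywords.

```python
import re

# A bare-bones ELIZA-style responder: match a statement, swap pronouns,
# and reflect it back as a question. (The real ELIZA's script was far richer.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I am worried about my job"))
# -> How long have you been worried about your job?
```

Even this crude mechanism was enough for some users to confide in the program as though it understood them, much to the alarm of its creator, Joseph Weizenbaum.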

'AI psychosis'

Recently, there has been a growing focus on ‘AI psychosis’, where users with mental health conditions such as schizophrenia can have paranoid delusions fuelled by chatbots, although there is as yet no clinical literature on this. Chatbots based on LLMs can be prompted by users in ways that increase the chances of inaccurate information being presented to them.

If a user believes in a conspiracy theory, for example, the user could use a chatbot to engage in a dialogue that has the consequence of confirming the user’s beliefs. An LLM-based chatbot doesn’t understand what it is generating as output; it is simply trying to find a response the user will be satisfied with. There is no intentional manipulation at play but, nonetheless, this can be a harmful recipe.

LLM-based chatbots can be used to reinforce a harmful perspective that no real person would confirm unless they had malicious intent. Adding in the narrative that these AI systems are approaching superhuman capabilities can give them a god-like status in the minds of vulnerable users.

AI technology is extremely powerful and impactful and, therefore, places an enormous responsibility on those making it available to ensure that it can be used safely and ethically. There is much excitement and hype around AI at the moment. It is important that hype is challenged, that we keep our feet on the ground, and that we maintain a watchful eye on its impacts.

Gaining literacy in AI is now an important life skill, and one of the reasons that, under the European Union’s AI Act, there are specific obligations on the providers and deployers of AI technology on this very topic. AI, in my opinion, has been an overwhelmingly positive technology, but we must pay attention to the risks and deal with these matters through technological advances as well as education and literacy initiatives.

  • Barry O’Sullivan is a professor at the School of Computer Science & IT at University College Cork, founding director of the Research Ireland Centre for Research Training on Artificial Intelligence, a member of the Irish Government’s AI Advisory Council, and former Vice Chair of the European High-Level Expert Group on Artificial Intelligence
