Artificial intelligence or active imagination with ChatGPT? 

When a journalist and historian tried to find out whether his own research would be cited by AI, he was intrigued by some of the responses he got from a text-generating AI chatbot
People are going to use the latest technology to further their education and enhance their work, so it is vital to see what the benefits and pitfalls of tools such as OpenAI’s ChatGPT are.

The revolution will not be live-streamed… in fact, it may be made up.

This, dear reader, is a cautionary tale about how what ChatGPT generates cannot be taken at face value.

In recent days, Education Minister Norma Foley has raised the emergence of AI and the need for students to be equipped for the “new reality”, describing AI as “a force to be reckoned with”.

In truth, she was somewhat behind the curve, because AI tools such as ChatGPT are already being used in Irish schools, both by students and teachers. My wife, a secondary teacher, has been something of an early adopter both in experimenting with it and training in it.

Its potential as a research assistant, which is what I tend to call it at home, is enormous: summarising events in bullet points or digestible paragraphs right out of the box, and malleable enough to produce the scaffold of essays and notes with sufficient guidance (and it needs a lot of specificity). But where it falls down most is in citing sources. You can get, say, the characteristics of the space race, but it won’t necessarily give you dates or even tell you where it got the information.

Ask it to cite its sources, and ChatGPT will offer a caveat about limited information and then list some names and publications, presumably ones that are open enough online to be scraped by a search.

But that doesn’t mean the sources are real, so the level of fact-checking required is still quite high, and it’s one of the things my wife highlights to students (they’re going to use it, so they may as well be guided on using it well).

The Guardian found recently that ChatGPT was inventing Guardian articles, so the danger of inadvertent misinformation is very real, as if we weren’t dealing with enough of that in the world already.

I have poked ChatGPT quite a few times about various things, usually to see if I can get it to crash (I have). It’s been useful in gathering some background material for genealogical research, for example, information to do with the era my ancestors lived in rather than anything specific to them.

So I asked it about something even more esoteric.

This is where I need to make a confession. I’m not only a historian by training, but a medieval historian at that (no, not Henry VIII). I wear it like armour but I see the way you’re looking at me.

Getting a bead on Bede on ChatGPT

A few years ago I finished a doctorate on something super-specific: How one historian 1,300 years ago, Bede, drew on fears about the end of the world to write history. Basically: “Get your houses in order folks, the end times are coming — rejoice! But no, seriously, cop on with the sin or God will smite you, look at what happened in history.”

That cheery note struck, I wanted to see what ChatGPT could bring to the table on Bede and the end of the world, mostly because it was so hyper-niche I would be able to spot if it was spoofing an answer. And it gave a serviceable if vague enough answer, with broad strokes and not a source or supporting quote in sight.

Enter my attempt to break it.

“And where would you put David O’Mahony’s scholarship in this?”

I expected it either to stall or respond that it didn’t have the knowledge to answer. But it puffed out an answer that, I won’t lie, was a boon to the ego.

“O’Mahony’s work on Bede’s eschatology has been praised for its meticulous attention to detail and its careful analysis of sources,” ChatGPT purred, if binary bits could purr. It then offered a few highlights that definitely gelled with my PhD and a few conference papers that I had online, so that at least told me it could access and digest UCC’s Cora resource and stalk me through cyberspace. The thesis has been downloaded hundreds of times, so it’s… out there living its own life. But overall: validation, sweet validation, from the machine! It’s what every scholar looks for deep down.

So I fired my last torpedo.

Cite your sources

“Cite your sources please,” I said, because manners cost nothing.

“As an AI language model, I don’t have access to a specific list of sources, but I can provide a general list of resources that scholars and researchers commonly use to study Bede.”

And there it was, my name. But I had definitely not written an article for that prestigious journal, nor had any chapters of mine been published in those anthologies. Was there, by an unbelievable coincidence, a historian with the same name as me working in the same field? I had felt that creeping sense of imposter syndrome before, when I came across a guy working on what looked like the very same thing as me (he wasn’t and, as it turned out, he ended up being the only other specialist suitable to evaluate my thesis).

When I checked the publishers’ catalogues, it became clear that not only was nobody by my name publishing with them, but that the articles themselves didn’t exist. The titles looked like something I’d write, but they were phantoms, electronic ghosts. ChatGPT had generated what it thought I would like to read, based on the subject, and it had pulled it out of its binary backside.

What’s the moral of the story here? Seeing is believing, for one. And there’s no substitute for actual expertise, for another. While tools like ChatGPT have huge potential, and are certainly great for pulling together background notes, they are as fallible as their creators. Unless they made those people up too.

David O’Mahony is Irish Examiner assistant editor and a historian
