Sarah Harte: Who needs Russia sowing confusion when AI is already doing it

While it’s histrionic to say that AI risks turning us into dribbling dumbos who swallow any old rubbish as the truth, it pays in all facets of life to have a questioning mindset, writes Sarah Harte
One intriguing and at times funny story at the moment is the possibility that an Oireachtas member codenamed ‘Agent Cobalt’ is being investigated by An Garda Síochána for being a Russian spy. 

As reported earlier this week, the alleged value of this agent is not to access classified material but to “influence narratives, delay policymaking processes, and sow confusion” in political and security structures. The plot thickens.

Reports are conflicted as to whether Agent Cobalt is in fact sowing confusion or influencing narratives, but sure, we have AI doing just that for us anyway.

I’m sure the evangelists who are lining up to tell us how AI will solve complex problems in areas as diverse as climate change mitigation, finance, healthcare, and education will shake their heads. 

Certainly, AI brings its benefits, including cutting down on drudgery and grunt work, saving time and therefore money, and allegedly freeing us up as humans to do more human-led things. Sounds great.

Russian president Vladimir Putin. One nagging question I have is, who's currently shaping truth and culture, and to whose benefit? File photo: AP/Julia Demaree Nikhinson

However, to sift the wheat from the proverbial chaff and work out what is authentic and factually accurate in a post-truth tech-led AI environment, you need some measure of critical thinking skills. From where I’m typing, it stretches credulity to believe that AI is not having an impact on those critical thinking skills, but not everyone agrees.

Last week, I was chewing the fat in the sun when I mentioned I would be doing a podcast on AI and misogyny in September. One Zoomer piped up that while that sounded promising, I should be aware that members of Gen X who commented on AI often sounded clueless. 

The implication was that Zoomers cringed when they heard us sounding off on the subject. I accept it’s a generational responsibility to think that elders are hopelessly naïve and out of date. 

I also concede that, as a non-digital native who had an abacus when she was young and considered the gift of a Casio calculator exciting, my lens is shaped by that. Not quite going to bed with a gas lamp, although from a digital native's perspective, it equates to that.

I made my counterpoints politely. What I itched to say was that the craggier among us grew up in a time when we routinely read long books with small print and didn’t need an assurance that an article was only a four-minute read. 

OpenAI CEO Sam Altman. If we are to harness AI as a helpful tool, we will evidently have to teach critical thinking. File photo: AP/Lee Jin-man

One of the benefits of our apparently comically antediluvian upbringing was that we had to visit the library and look things up when we wanted an answer to something. We used indexes and learned how to search using our own grey matter. Along the way, we developed skills to reason independently.

Am I advocating a return to hand-written card indexes and abacuses? No. But I remain unconvinced that automatically jumping to the AI chatbot for an answer doesn’t bring cognitive costs.

AI study

Studies examining the impacts of AI are ongoing. A bit like with Agent Cobalt, the jury is still out. However, a fascinating new study from MIT, admittedly small and not yet peer-reviewed, raises concerns.

The study asked 54 university students to write a series of essays over several months. One group used ChatGPT. The second was allowed to use Google. The third used only their brains. The neural activity of the students was monitored, and the essays were also critiqued and ranked by English teachers.

The study found that the group who used ChatGPT became lazier as the months progressed: their neural activity scaled down, so that by the end, in between grunts (the grunts are my insertion and are not actually in the study), they were reduced to copying and pasting things. 

Also, their essays were, surprise, surprise, highly homogeneous, lacking original thought and serving up the same ideas and expressions.

Those who only used their brains were the most curious and most engaged, showed the highest neural connectivity, and wrote essays that they were more satisfied with. Those who used Google alongside their own brains also showed active brain function.

One of the conclusions of the study is that the use of LLMs could harm learning and long-term brain development, most especially for young users. So, while it's true that with ChatGPT you get an answer with minimal effort (often an inaccurate one, as things stand), critical thinking, creativity, and problem-solving potentially take a bath.

AI accuracy

Let’s zoom in on the accuracy thing. LLM usage has tangible negative effects when its users are young and have not been taught to critically evaluate online information.

Last week, it was reported in the Irish Examiner that school pupils are widely falling for myths and misconceptions around sexual health and contraception because, according to a new study, people are relying on AI and social media for information. The large study, Debunking the Myths, was carried out over three years with more than 17,700 students across 166 schools.

Almost two-thirds of pupils questioned as part of an education programme believed that using contraception can lead to infertility. Another 39% mistakenly thought it was unsafe to take the pill without a break.

Fergal Malone, consultant obstetrician and the head of the RCSI’s department of obstetrics and gynaecology, said when you “rely on ChatGPT or Meta AI or any of these, you are assuming the background information they are providing you with is accurate”. 

As co-lead on the study, he points out that AI is simply echoing inputs from other sources like TikTok.

Contraceptive advice from a vast army of undereducated, over-confident influencers, relayed via AI to cognitively underdeveloped young people. You can see the problem here. The potential birth of babies called Meta AI Murphy or ChatGPT O'Brien.

AI chatbots also sometimes "hallucinate", producing false results based on patterns they identify rather than factually verified information. Nor are they constantly updated. That is something we need to broadcast. 

If we are to harness AI as a helpful tool, we will evidently have to teach critical thinking.

We also need to regulate the use of the technology, so it integrates meaningfully into our lives. One nagging question I have is, who’s currently shaping truth and culture, and to whose benefit? 

Might it be that obscenely rich white tech bros, their monopolistic companies and those who program the models for them are exerting unprecedented influence under the guise of progress?

Stretching into the realm of conspiracy theory or a distinct possibility? On that note, I’m looking forward to discussing the use of biased, misogynistic, and racist AI data sources next month on a podcast with Irish Examiner business journalist Emer Walsh.

While it’s histrionic to say that AI risks turning us into dribbling dumbos who swallow any old rubbish as the truth, it pays in all facets of life to have a questioning mindset. It pays to interrogate the problems, particularly during a gold rush.

I’m paraphrasing heavily here, but as the late mathematician, scientist, and philosopher Dr Jacob Bronowski might have said (he died in 1974), we need to bring a certain “ragamuffin barefoot irreverence” to the technology and ask searching questions; our role should never be to simply accept.
