Sarah Harte: We struggle to keep up as artificial intelligence technology surges ahead
Fake news? In a tweet, Kuwait News announced Fedha as 'the first broadcaster in Kuwait that works with artificial intelligence'. You can see the short clip of the AI-generated presenter in their tweet below.
Increasingly, AI is being used in education. Last month, Pat Hickey, a history teacher from Mallow, Co Cork, spoke to Education Correspondent Jess Casey about AI chatbots revolutionising the approach to both teaching and learning.

What happened to Pierre, a Belgian man reported to have taken his own life after weeks of conversations with an AI chatbot, is the kind of tragic consequence that some in tech are worried about. It’s why many AI researchers have spoken out against using AI chatbots for mental health support or counselling.
Perhaps an existential question for our age is whether we want computers to confront genuine human problems in human terms. Leveraging computers to mimic the problem-solving capabilities of the human mind is one thing, but maybe human empathy should be kept as a human prerogative. A world of synthetic relationships may not be what we need, so this feels like a pressing ethical question.

Recently it has been said that once a certain threshold is passed, there is no predicting what AI will do. Last week, it was reported that “researchers say Microsoft and Google are taking risks by releasing technology that even its developers don’t entirely understand”. An interesting view is that sentiments such as these amount to deliberate ‘apocalyptic doomsaying’ by technology companies, filtered through a media they are manipulating.
Two weeks ago, as widely reported, an open letter signed by influential members of the AI community, including cyber-libertarian Elon Musk, called for a six-month moratorium on what they termed the “dangerous” AI race in order to avoid “loss of control of our civilisation”.
This ‘doomsaying’ prompted a mixed bag of responses. Some lauded the letter as a way to draw attention to the need to set shared rules around the development of AI. Others came out of the traps accusing the signatories of fearmongering around technology.
A pertinent question is why Elon Musk is suddenly being so selfless.
While Musk is touted by some (particularly by his mother) as being a genius, the idea that the human race and its best interests are his priority seems a stretch.
One conceivable answer to the Musk question comes from the Distributed AI Research (DAIR) Institute, which sets out to study, expose, and thereby prevent AI-associated harms. The group includes Timnit Gebru and others who were pushed out of Google over a paper criticising AI’s capabilities, which did not square with the corporate message.
The DAIR Institute’s theory is that the open letter calling for a moratorium is actually a cynical marketing ploy by private commercial interests keen to hype the potential of AI in a backhanded way. The signatories have pretended to go cold on AI in order to provoke other companies to climb on board, in what they hope will be a case of corporate fomo, or fear of missing out.
It’s a well-informed view from publicly minded people who have the technological chops to understand AI.
The message from the DAIR Institute and others essentially seems to be that we need to be nervous about AI but for the correct reasons. If we understand the real risks, we can attempt to regulate AI properly.
Fake news is a looming problem springing from AI. News organisations will increasingly struggle to authenticate content, never mind the average person, who will have to contend with inaccurate but authoritative-sounding news.
While AI brings with it huge creative possibilities, intellectual property infringement is another AI-related problem that needs addressing.
American artists have already brought a class-action lawsuit claiming that AI systems were trained on their work.
The hazard of cybersecurity attacks and the privacy risks of chatbots mining data stored in various locations around the internet are perils that should not be downplayed.
Last week, Sasha Luccioni, a researcher who works at the intersection of AI and the climate, said in a tech magazine that “regulatory authorities across the world are already drafting laws and protocols to manage the use and development of new AI technologies”.
She cited the critical need for transparency around the rules regulating AI, adding that companies developing AI models must “allow for external audits of their systems”.
The harsh reality is that while AI ‘progress’ tears ahead, legislation lags behind. The US, the frontrunner in AI development, has no appropriate legislation.
Who knows what regulation looks like in China, where democratic scrutiny could be said to be weak? The UK, too, lacks the requisite legislation to deal with AI.
Thankfully, the EU is bashing on with its AI Act, though not fast enough to keep pace with lightning-speed technology.
It doesn’t take a cynic to see that some countries won’t implement strong governance around AI to protect their citizenry from Big Tech, because they are engaged in something comparable to a modern-day arms race, with AI leadership the goal.
And saying that corporate ethics must fill the gap doesn’t make it so. The existence of an ethics team in a company is not a signal of corporate responsibility, and to think so is naïve. It has also been widely reported that tech companies are breaking up or diluting their technology ethics teams because they don’t like the answers they are being given.
“The first presenter in #Kuwait that works with artificial intelligence.”
— Kuwait News (@KuwaitNews) April 8, 2023
• “#Fedha .. the virtual presenter of #KuwaitNews”
• “What kind of news would you prefer our new colleague #Fedha to present? Share your views with us.” pic.twitter.com/VlVjasSdpb
As tech users, it’s not a binary choice between being a Luddite and climbing unthinkingly on board. So many of us, like monkeys, can figure out how to use technology while not necessarily understanding its significant downsides.
It took us far too long to bring the tobacco companies to heel before health warnings became mandatory on cigarette packets. With chatbots, then, explicit mandatory warnings that are clear and easy to understand are called for, along with a public awareness campaign on the downsides of AI to facilitate critical engagement with the technology.
Our lives are worth more than a social-psychological experiment, or a corporate race to bring AI to market, yielding lucre for tech titans who have already shaped our society in ways we never dreamed of.





