Gareth O'Callaghan: Even the godfather of AI is warning us about artificial intelligence now
From Grok nudification to as-yet unknown perils of 'artificial general intelligence', Geoffrey Hinton says AI is 'extremely dangerous'
As technology outpaces regulation, we may not be too far away from a future in which AI may supersede us ordinary, flawed human beings. Picture: iStock
What would happen if humans were no longer the most intelligent beings on earth? What if the thing that replaced us viewed us as nothing more than expendable colonies of worker ants?
Geoffrey Hinton, the pioneering computer scientist often called the 'Godfather of AI', told BBC Newsnight just last week that the technology he helped to build now fills him with deep regret.
"It makes me very sad that I put my life into developing this stuff and that it's now extremely dangerous and people aren't taking the dangers seriously enough," he said.
Hinton was responsible for developing the foundations of todayâs AI systems.
If we still joke about the notion that AI – let's call it 'the machine' – could surpass human intelligence within two decades, then we're doing a dishonour to our own rapidly diminishing intelligence.
According to Hinton, we are producing tools that are more intelligent than we can ever hope to understand.
"The idea that you could just turn it off won't work," he warned.
AI pioneer Geoffrey Hinton: 'It makes me very sad that I put my life into developing this stuff and that it's now extremely dangerous and people aren't taking the dangers seriously enough.' Picture: Chris Young/The Canadian Press/AP
"If we create them so they don't care about us, they will probably wipe us out."
To be honest, I found it difficult to sleep after watching his interview.
We only have to look at social media right across its platforms to realise just how much AI has deepened social instability.
The good old days of Twitter
Look at X. Its forefather, Twitter, would have celebrated its 20th birthday in a few weeks' time. For those who still miss its early days, it was once a noble platform of real human intelligence and debate.
Who remembers the work of profundity that was the original tweet: counting your characters to a limit of 140 while attempting to perfect a readable piece of prose that might engage other like-minded tweeters?
There was a level of cognitive logistics and educated intelligence about the task that has long disappeared, leaving in its wake a global toilet.
Covid changed Twitter. It hadn't yet become the global right's supercharged house of madness, but it wasn't far off.
Disinformation and lies
In early 2020, it became the nest of the conspiracy theory, whose seeds had been sown shortly after the Charlie Hebdo attacks in Paris in 2015, when highly politicised and polarised groups started to test the central position of mainstream media on the platform.
Covid took it to a whole new level of indoctrination.
What about this 2021 tweet from an antivax mother, which read: "Urine and faeces of the vaccinated should have its own sewer to avoid traces getting into the drinking water affecting those of us who don't buy into lies."
I wanted to reply and ask her views on blood transfusions, and whether she'd accept blood from someone who had been vaccinated if it meant saving her life.
She was just trailing a conspiracy theory, a blatant lie spread at a time when many people couldn't tell the difference between truth and lies.
Gibberish and rage bait
Scrolling through X these days is like wading through a sewer of anarchic gibberish and malice, inhabited by creatures with weird usernames whom you'd never want to meet.
Reading some of their highly offensive tweets makes you feel the urge to brush your teeth just to be rid of the rancid aftertaste of one human's pure hatred for another.
Are they all AI? And if so, is this what the future holds, more generally? If only what was once a platform for debate and insight hadn't been allowed to die such a slow death.
What finally killed it? It would be easy to blame the lack of content moderation since Elon Musk bought it in late 2022. But there was a lot of putrid material there long before Musk got his hands on it.
The Grok app, via Elon Musk's Twitter/X social media platform, allows people to instantly create fake undressed or sexualised images based on photos of real people, particularly women or children. Stock picture: PA
Does anyone really believe that Grok's so-called 'nudification' feature suddenly started answering commands in some rogue way it wasn't intended to? Nonsense. Porn is big bucks. Grok is merely testing the waters.
AI has cut through whatever social and ethical boundaries did exist on X, and Facebook, and all those other platforms that rely on the onward march of its machines' insights.
That's right – AI is on the verge of seeing, as we humans understand the concept of sight. Who's to say it's not?
My concerns don't lie with how social media is run by a machine capable of thinking faster and behaving in a more lifelike way than most of us realise. My fear is what the machine will soon be capable of doing with the knowledge and images it's gathering of each of us – when the same machine takes on the relative pronoun 'who'.
'Artificial general intelligence'
AI is the precursor to 'artificial general intelligence', which could be a reality within this decade, and in turn would become a superhuman presence.
It's an inevitable consequence of a product that not even its inventors can predict or feel safe about.
If you've ever uploaded your photo to an AI app to get a sneak peek of what you might look like in 20 years' time, or what the son you never had might look like when he is 18, did you ever wonder what might happen to the original photo used to 'clone' your image?
I did it myself last month to see what I might look like as Santa Claus. AI owns my picture and my Santa projection.
So does that mean it could soon control how others see you if it wants to continue cloning your image? Could it create your digital double? Think about it. It already has. It's stored somewhere in its invisible banks of data.
You have been claimed. We all have.
AI has at its disposal more data than any of us could ever hope to read in several lifetimes. Whereas we live on a linear timeline, the machine exists in an exponential space the size of which not even the scientists who created it can calculate, never mind understand.
AI will learn how to lie
So what hope is there for the rest of us? That depends on our personal concerns as to what happens to the photos and information we share on social media.
Lying, for example, is a human design shaped by cognitive, social, and ethical norms. Just as the woman who told me the drinking water supply is contaminated by the waste of someone who has been vaccinated didn't know she was lying, AI will also lie to us.
The more the machine can perfect human cognition and social interaction, the easier it becomes to play truth off deception.
Every time we go online these days, we are projecting our alter ego version onto the machine.
Call it my doppelganger – a person who resembles me; except that person over time, given the right profile, photos, and every source it can find on me, becomes greater than who I am. A superhuman version of me in ways only the machine can design, which then destroys and replaces all that I once was.
It's my digital double. My online projection. For all I know, it's already happened.
It's just waiting for artificial general intelligence to give it life.
Science fiction? Not anymore. This train isn't going to stop. Social media is dead.
What's coming in its wake has the potential to be truly terrifying. Ask yourself what happens when a version of you becomes more intelligent than you.
As Naomi Klein says in her book Doppelganger: A Trip into the Mirror World: "Be careful about falling in love with your projection – it could well overtake you."