RISE OF THE MACHINES: Could artificial intelligence kill us off?

We still don’t understand the real thing, so we’re right to worry about the rise of AI, writes Bella Bathurst

You’re awake, you’re sentient, you might even be upright. You’re not comatose or dead, and it’s reasonable to assume that if you were on some kind of powerful mind-altering drug then you wouldn’t be reading this.

The point is, you’re here, and you’re alive, so therefore you’re conscious. You know you are.

Ok then, since you’re conscious and I’m conscious and everyone else is conscious, go ahead. Define it. What is consciousness? Where does it reside? Does it belong to the mind or the body, or does it exist outside both? Is consciousness part of our souls, or does it live in the things we create – our art, our music, our cities and wars?

Whether nightmare or nirvana, the idea of the Singularity blends Pierre Teilhard de Chardin’s notion of a single great consciousness with modern artificial intelligence.

Could it be mechanical or electronic, and, if so, what makes it operate? Most pressingly of all, is it possible we have now made for ourselves a new kind of consciousness, one which exists independently? If so, then what the hell have we got ourselves into?

The search for a definition of consciousness can lay claim to being the world’s longest-running detective story. We’ve had our best minds on it ever since we developed brains big enough to ask questions and, still, we seem to be stumped.

Plato and Aristotle couldn’t fix it; Kant, Hume and Locke tried different angles; Schroedinger, Heisenberg and Einstein remained in awe before it. None of them came up with the final formula, the definitive, nailed-it-forever, silences-all-critics answer.

Lately though, the hunt seems to have changed gear. Despite big differences about how best to conduct the search and where to look, several of the most persistent sleuths have found themselves disconcertingly close to agreement. No-one is yet at the stage when they are ready to call a press conference and announce to the world they have finally apprehended the suspect, but they have at least begun to converge on these two leads: the Omega Point and the Singularity.

Pierre Teilhard de Chardin is an improbable prophet, partly because he’s dead, and partly because he’s still associated with a famous palaeontological fraud. Born the fourth of 11 children near Clermont-Ferrand in France in 1881, de Chardin developed two interests when young: God and fossils. Aged 18, he entered the Jesuit order as a novice before completing his studies in philosophy and maths.

In 1912, he became part of the team working on Piltdown Man, the “discovery” of bones in East Sussex which were claimed to belong to an early hominid and thus to provide the missing evolutionary link between apes and humans.

Nearly 40 years later, the find was exposed as a hoax: team leader Charles Dawson had combined the skull of a modern human with the jaw of an orang-utan. Whether or not de Chardin actually participated in the fraud has never been settled – the tooth he found at the site was a major supporting piece of evidence – but either way, his archaeological work was interrupted by the outbreak of war.

When he resumed in 1918, he moved the focus of his studies sideways into geology and began teaching in China.

For the rest of his life, he combined writing, spiritual practice, teaching and adventure. By the time of his death in 1955 he’d driven a car across the whole of Eurasia and had a long but supposedly unconsummated relationship with an American sculptor called Lucile Swan.

But it was neither his science nor his love life that brought him into conflict with the church. It was his attempt to synthesise evolution and Christianity, and his views on original sin. The sin bit is still clouded (no-one knows whether he was in favour of more or less) but de Chardin’s basic theory was that as science, humanity and civilisation develop, there will ultimately come a point when the noosphere – the sphere of sentient thought – evolves until it joins with itself, human consciousness unifies, and ... and something wonderful happens.

Artificial intelligence is explored in films such as Ex Machina (above) and Blade Runner.

“At that moment of ultimate synthesis, the internal spark of consciousness that evolution has slowly banked into a roaring fire will finally consume the universe itself,” he wrote in Let Me Explain, a collection of his thoughts published in 1970.

“Our ancient itch to flee this woeful orb will finally be satisfied as the immense expanse of cosmic matter collapses like some mathematician’s hypercube into absolute spirit.”

If the noosphere is to reach this exciting finale, then all the fractured layers of human thought must first be conjoined by a single disembodied intelligence. De Chardin envisaged that disembodied intelligence as something directed by us, but separate – an intelligence which now just happens to look a lot like the internet. The upside to noosphere theory is not only that it appears to unify science and theology, but that it also takes account of artificial intelligence. The downside is that, even allowing for mistranslation, de Chardin’s writings are a stiff uphill climb through thickets of abstraction. Despite this handicap, it seems he’s finally found his moment.

As a formally trained scientist in the 1940s, de Chardin took evolutionary theory as a given. The Catholic Church did not. His masterwork, The Phenomenon of Man – in which he argued that the next stage of evolution would be the point at which everything in the cosmos, all science, all thought, all energy, all matter, began to spiral towards an Omega Point of divine unification – did not please the Vatican, which banned him from publishing during his lifetime and exiled him from France.

At the time de Chardin was writing, a single global intelligence seemed both far-fetched and far-distant. But his theories have since been hauled into the 21st century by other thinkers and other disciplines.

The physicists are all busy looking for a grand unified Theory of Everything, while within biology de Chardin’s ideas have found their most popular form in variants of the Gaia theory and the work of James Lovelock. After all, if the Earth functions healthily as a single organism, then surely the human consciousness within it must also function collectively.

And then there are the Singularitarians, who believe there will come a point in the not-at-all-distant future when artificial intelligence finally outstrips human intelligence and computers become independently capable of designing their own successors.

Different thinkers suggest different dates. The author and scientist Vernor Vinge has suggested it’s just around the corner, arriving by 2030; Singularity fans have come up with a median estimate of around 2040; and the manufacturers of drone toothbrushes (which spy on your brushstrokes and sneak the data to your dentist) evidently think it arrived some time ago.

Could computers simply eliminate the need for humankind, and if they were super-intelligent, what form would that intelligence take? The assumption at present is that any alternative technology, whether originally designed by us or not, would automatically be in competition with humans. In other words, it may exist through our design, but it could soon design us out of existence.

So. If all of these things (quantum physics, philosophy, government, capitalism) are indeed beginning to converge, are we reaching de Chardin’s tipping-point, and if we are, what’s on the other side? Was he really on to something, or just another visionary millenarian charlatan? And – trickiest of all – how are we supposed to know if our consciousness is changing when we don’t even know what it is?

Rendered down, theories on consciousness divide into three. There’s the rational/scientific approach, there’s the spiritual/mystical approach, and there’s the point where the two views intersect.

The rational/scientific approach holds that consciousness is some kind of by-product of existence, and existence is part of the universe we belong to. Since consciousness is within this big but comprehensible universe, one day we’ll be able to find and measure it.

There will come a time not so far away when we finally invent an instrument – a probe, a spectrometer, a set of scales – with which we can locate that consciousness, pin it down and describe it like we can describe longitude or internal combustion.

Maybe we’ll find it in the brain, maybe we’ll find it close to the heart, maybe – like the old theory – when we finally come across it we’ll discover it weighs exactly 21 grams and leaves the body exactly at the point of death. Either way, we’ll definitely find it.

The spiritual/mystical approach says that any attempt to find a physical version of consciousness is hilariously perverse, since it starts from completely the wrong end. Consciousness isn’t a product of the universe, the universe is a product of consciousness. We are all within consciousness and we are all indivisible from each other.

Every single one of us living billions comes from consciousness and is capable of comprehending it, whether adult, child or mouse. In fact, the child and the mouse probably have a better grasp of consciousness than adults do, because adults have far too much rational stuff getting in the way. Consciousness is better understood in an instant than in a lifetime, and all a child spends its life learning is the art of forgetting.

“ONCE HUMANS DEVELOP ARTIFICIAL INTELLIGENCE, IT WOULD TAKE OFF ON ITS OWN AND REDESIGN ITSELF AT AN EVER-INCREASING RATE. HUMANS, WHO ARE LIMITED BY SLOW BIOLOGICAL EVOLUTION, COULDN’T COMPETE AND WOULD BE SUPERSEDED”

Theoretical physicist Stephen Hawking has a personal insight into machine consciousness – his communication technology recognises the patterning in his thoughts. Picture: David Parry/PA

This way round, you do not have to be clever to understand consciousness. In fact, cleverness can be an active disadvantage. Most of us can’t get our heads around consciousness because our minds get in the way, and yet those who devote themselves to the search for it – the philosophers, the theologians, the astrophysicists, the neurochemists – tend to be clever. Very clever, and/or very wise.

Wise enough to be multi-disciplinarians, to synthesise both the scientific and the mystical, and to honour the place of both.

In which case, this article should be read with a proviso: writing about consciousness is pointless. Completely ridiculous. It’s like trying to find an accountancy of love or a taxonomy of song; the more words you expend on it, the further you travel away. Consciousness is not a three-dimensional phenomenon but a multi-dimensional one, and since writing is a three-dimensional solution, it has to be the wrong tool for the job. You’ve got a much better chance of comprehending consciousness by staring out of the window or listening to music than you have by reading about it.

Still, philosophically, the scientific and the mystical appear very far apart. From the scientific point of view, consciousness is just a problem waiting to be solved. The mystical way round, there are no problems, any more than there are space, time, solutions or galaxies; nothing differentiated, nothing corporeal. There’s only a single atomless soul.

In all its versions, that soul – for want of a bigger word – is what most of us spend our lives searching for, whether through God or meditation or the someone who completes us. Most of us, whether we acknowledge it or not, are looking for our way back to that single self, and since most adults have long since lost the straight route, we search instead down the side-roads: near-death experiences, sex, drink, drugs, early ’90s German trance; anything that appears to shorten the gap between what we can see and what we sense is there.

Within the scientific group, it’s barely worth saying there’s a world of difference between Daniel C Dennett’s hardcore ultra-Darwinist position and that of Albert Hofmann, who in 1938 first synthesised LSD and found in its visions the key to the doors of perception. But over the past few decades, there’s been a notable shrinkage in the distance between the mystical and scientific positions. It used to be that you were either/or. Either you were an ardent rationalist, or you were an old acid-casualty boring on about bad trips.

Now, if anything, it’s the scientists who seem to be wandering round with their arms extended, murmuring, “wow, man, it’s all so, like... quantum.” The great thing is watching science admit it doesn’t know things – that what it finds at the end of its vastly improved instruments and calculators is not more certainty, but more uncertainty. There are places where the conventional rules do not apply, and one set of rules appears to cancel another out.

During the past century, the various branches of quantum science have arrived at points which would impress even the trippiest hippy. Newtonian physics disintegrates once past the atomic level. Einstein’s spacetime of General Relativity still cannot be reconciled with Quantum Field Theory.

For every law there is a contradiction, for every stone of solid rational ground there are as many quicksands of inconsistency. Particles don’t always behave the same way, light can be both particle and wave, time-travel is just waiting to happen.

And further. All possible outcomes lead to parallel universes; we exist at all times in a multiverse, not a universe; within the subatomic world there can be effects without causes; the behaviour of an atom on one side of the world influences the behaviour of an atom on the other side of the world; time bends; complexity is founded on simplicity and everything is actually an endless pattern-repeat, like cosmic wallpaper. Plus of course Schroedinger and his poor half-dead cat: the act of observation changes both the observer and the nature of the thing observed.

The convergence between what the scientists are now saying and what the mystics have been banging on about for four millennia or so is complemented by another kind of convergence: the narrowing gap between machine intelligence and our own.

The concept of the Deus Ex Machina, the God from the Machine, has existed since the Greeks invented tragedy, but the notion of an ultimate technological Singularity was first given expression in 1872 by the writer Samuel Butler in his novel Erewhon, and then given both a name and scientific plausibility in the 1950s by the mathematician John von Neumann.

Even so, the issue with the Singularity is not so much the point at which it might or might not occur, but what its disciples think will happen on the other side.

As the theoretical physicist Stephen Hawking has suggested, computing capacity is even now growing at such a rate that we can barely control it: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.”

The technology Hawking relies on to communicate gives him unusual insight into the issue. His upgraded computerised voice system was built by Intel and uses predictive-text technology from the British software company SwiftKey, designed to “read”, or recognise, the patterning in his thoughts.

Hawking’s first thumb-operated system allowed him to communicate at 15 words a minute (ordinary speech is about 150 words per minute), but the degeneration in his remaining active muscles meant that by 2011, he could only spell out around two words in that time.

The Intel team ended up designing something which comes close to reading Hawking’s thoughts. It learns the way he likes to construct his sentences, adapts itself to his habits of “speech”, and now needs only one or two letters of a word or phrase before predicting the rest. It even factors in his grammatical fastidiousness and his resistance to new technologies.
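
The word-prediction principle itself is simple enough to sketch. What follows is a toy illustration in Python – the class name and training text are invented here, and Intel and SwiftKey’s real system is vastly more sophisticated – showing the basic idea: rank completions of a typed prefix by how often the user has produced each word before.

```python
from collections import Counter

class PrefixPredictor:
    """Toy word predictor: completes a typed prefix using the words
    the user has produced most often in the past."""

    def __init__(self):
        self.counts = Counter()

    def learn(self, text):
        # Adapt to the user's habits of "speech" by counting their words.
        for word in text.lower().split():
            self.counts[word] += 1

    def predict(self, prefix, n=3):
        # Return the n most frequent known words starting with the prefix.
        matches = [w for w in self.counts if w.startswith(prefix.lower())]
        return sorted(matches, key=lambda w: -self.counts[w])[:n]

predictor = PrefixPredictor()
predictor.learn("the boundary conditions of the universe are that it has no boundary")
print(predictor.predict("bo"))  # ['boundary'] after only two letters
```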

As Hawking points out, all this mind-reading technology is still relatively primitive, “but I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

The best test for the claim is Moore’s Law. In 1965 Gordon E Moore – who went on to co-found Intel, the company which helped Hawking find his voice – suggested that computing power and complexity would double every two years. Within the industry, the law has succeeded partly as a self-fulfilling prophecy, a goal which technology companies deliberately strive for. If anything, the gap between doublings is now more like 18 months. The end-point (say, 2040) is supposed to be the moment at which computers outstrip us and become entirely self-replicating.
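
The arithmetic behind such projections is simple compounding, and easy to underestimate. As a rough sketch in Python – taking the 18-month doubling period and 2040 end-point above purely as illustrative assumptions, with 2015 as an arbitrary starting year:

```python
# Back-of-the-envelope arithmetic for Moore's Law style projections.
# The 18-month doubling period and the 2040 end-point are the article's
# figures, used here purely for illustration; 2015 is an arbitrary start.
start_year, end_year = 2015, 2040
doubling_period = 1.5  # years per doubling

doublings = (end_year - start_year) / doubling_period
growth = 2 ** doublings

print(f"{doublings:.1f} doublings by {end_year}, "
      f"a roughly {growth:,.0f}-fold increase in computing power")
# 16.7 doublings by 2040, roughly a 100,000-fold increase
```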

Some proponents believe the post-Singularity future will be all the better for it. Admittedly they’re a broad church, including everyone from fans of cryonics to transhumanists and mind-uploaders – those who believe that the entire content of a human brain, every dusting of brainfluff and binload of thought-spam, could someday be uploaded on to a separate hard drive, thus rendering a real mortal body obsolete.

The prophet and leader of the benign-singularity camp is Raymond Kurzweil, Google’s director of engineering and long-term advocate for digital immortality. He’s been right about a lot of things before (scanners, text-to-speech software, a computer defeating a human at chess), but he’s been wrong about plenty too.

Bioengineering has not reduced mortality for cancer and heart disease, and humans turn out not to like buying things from a computer-generated “virtual personality”. Kurzweil is a controversial character who has spent almost as much time trying to ensure his own physical immortality (one of his books on the subject is titled Fantastic Voyage: Live Long Enough to Live Forever) as he has in predicting digital nirvanas.

Curiously enough, it’s those who are closest to the issue who are sounding the loudest alarms. The techies aren’t convinced that handing over so much power to something without either a pulse or a conscience is such a great idea.

Bill Gates recently generated several gigabytes of geek controversy with his caution against placing too much faith in IT.

“I am in the camp that is concerned about super intelligence,” he said in a recent online Q&A session. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

His fellow techie Musk, who has built a fortune and a reputation from taking bets on the future, recently spoke of his own concerns over AI. Musk started out with PayPal, now runs the electric-car manufacturer Tesla, and is sufficiently concerned about the various threats to life on Earth that he’s busy developing a rocket programme too, though as one of his recent tweets put it: “The rumour that I’m building a spaceship to get back to my home planet Mars is totally untrue.”

During a recent talk at MIT, he warned: “We should be very careful about AI. If I was to guess what our biggest existential threat is, it’s probably that. The thing with AI is that we’re summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and it’s like, yeah, he’s sure he can control the demon. Didn’t work out.”

Musk’s fear – and the fear of many of his peers – is that we end up designing something which either goes rogue, or which imprisons us (through something like mass surveillance) or which, by taking over the tasks and processes that at present only humans can complete, renders us obsolete.

Last summer, all of those future possibilities came one symbolic step closer. The Enigma code-breaker Alan Turing’s famous test – can a machine demonstrate intelligence indistinguishable from that of a human? – was declared to have been passed, at a contest held at the Royal Society in London. A computer had managed to fool a third of the judges into believing it was “Eugene”, a 13-year-old Ukrainian boy.

Eugene’s supposed age and nationality provided a cover story for the typos and the gaucheness in the conversation, and anything the programme couldn’t understand, it countered with a question. It’s easy enough to pick holes in Eugene’s performance after the event – in an age of trolls and cyber-spooks, aren’t we all used to the idea that print distorts identity? And wasn’t a five-minute conversation too short a test? – but still.
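
The deflection trick itself – counter anything you don’t understand with a question – is simple enough to sketch. The Python fragment below is invented purely for illustration (the real Eugene Goostman programme was far more elaborate), but it shows the shape of the tactic:

```python
import random

# Toy sketch of the tactic described above: canned answers where a
# keyword matches, and a counter-question where nothing does.
# (Invented for illustration; not the real Eugene Goostman programme.)
CANNED_ANSWERS = {
    "name": "I am Eugene. I am thirteen years old.",
    "from": "I live in Odessa. It is a big city in Ukraine.",
    "school": "I go to school, yes, but I like computer games more.",
}
DEFLECTIONS = [
    "Why do you want to know that?",
    "Hmm, what do YOU think about it?",
    "Can we talk about something else?",
]

def reply(message):
    lowered = message.lower()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in lowered:
            return answer
    # Nothing matched: dodge the question by asking one back.
    return random.choice(DEFLECTIONS)

print(reply("Where are you from?"))            # canned biography
print(reply("Explain quantum field theory."))  # deflecting question
```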

Those who work most closely with the brain do not lose their wonder at it, though neurosurgeons seem as divided as everyone else about exactly what, or who, keeps the show on the road. The neuroscientist Susan Greenfield describes looking down at a human brain for the first time as a student.

“Well, first of all they smell of formalin. It’s a really horrible smell. It stinks, but it keeps the brain firm while you’re dissecting it so you have to keep a set of gloves in a tupperware box. I remember it vividly – I remember holding it and thinking, ‘God, this was a person’. You can hold it in one hand and if it’s ready for dissection, it’s kind of browny-colour with dried blood vessels, and it looks like a walnut. Two hemispheres like two clenched fists.”

She believes that consciousness is not “some disembodied property that floats free. I don’t believe in the theory of panpsychism – that consciousness is an irreducible property of the universe and our brains are like satellite dishes picking it up. I can’t disprove it, but assuming that consciousness is a product of the brain and the body, then it’s inevitable that if the brain is changed then consciousness will change.”

Similarly, in his 2014 memoir Do No Harm, her friend and erstwhile colleague the neurosurgeon Henry Marsh regards the muddle over what belongs to the mind and what to the brain as “confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells.”

Some forms of neurosurgery are better done under local anaesthetic, which means the patient is awake and responding to questions throughout. How strange and how miraculous to spend your working life looking down at a brain within its bony casing whilst holding a conversation with its owner.

Teilhard de Chardin would probably have loved neurosurgery, with his palaeontologist’s mindset and his sense of the span of things. But does taking apart the brain, that living piece of physical origami, really get anyone nearer to knowing what consciousness is? Is it where the self resides, and if so, is that why brain diseases like Alzheimer’s gnaw away at the stuff of the self? If time or disease pulls away someone’s personality, burglarising all the stories that made them them and leaving nothing but a physical body, then has that disease made off with their consciousness too?

Dr Duncan MacDougall believed not just that consciousness and the soul were interchangeable, but that they could both be weighed.

In 1901 MacDougall was treating terminally ill tubercular patients in Massachusetts. Since his patients’ decline towards death followed a relatively predictable trajectory, he decided to test an idea he’d had by placing the beds of six of his sickest patients on scales. He then balanced those scales, sat back, and waited. At the moment of their death, he claimed, they got lighter.

Or, as the New York Times put it: “The instant life ceased, the opposite scale pan fell with a suddenness that was astonishing – as if something had been suddenly lifted from the body. Immediately all the usual deductions were made for physical loss of weight, and it was discovered that there was still a full ounce of weight unaccounted for.”

This, said MacDougall, was proof that the soul had mass. “The essential thing is that there must be a substance as the basis of continuing personal identity and consciousness, for without space-occupying substance, personality or a continuing conscious ego after bodily death is unthinkable.”

MacDougall tried the same hypothesis on 15 dogs and on several mice. None showed any change in weight, which he claimed was proof that only humans had souls. Since MacDougall’s original sample was small (of the original six patients, two were excluded, two lost even more weight after death and one put it back on, which left only one to uphold his theory) it did not take long for the experiment to be discredited. The 15 unfortunate dogs, incidentally, had been drugged, and presumably died under protest.

Most of MacDougall’s experiments were either daft or cruel. Like thousands before him and thousands afterwards, he snagged himself on two assumptions: one, that whatever doesn’t have mass cannot exist; and two, that the soul must be the same thing as consciousness. Which is the point at which things start to disintegrate. Faustian stories of soul-selling and -searching are compelling because they suggest that something unquantifiable can be apparated into something real. But there’s a point beyond which even stories can’t reach.

So maybe de Chardin was right about the Omega Point, and maybe he wasn’t. His ideas are gaining traction not so much because of their content but because, starting from a place of faith, he synthesised science, artificial intelligence and divinity.

His advantage was that he was a multidisciplinarian and that he gave the old hope for a better Heaven a catchphrase. But his noosphere can only really work as a point of departure for more questions. He envisaged his point of complexity and convergence as a moment of revelation, a final unified rising towards God. But even if he’s right, we all still have free will. And if there’s going to be a tipping-point towards a new universe, then we should make sure it tips the right way.
