RISE OF THE MACHINES: Could artificial intelligence kill us off?

You're awake, you're sentient, you might even be upright. You're not comatose or dead, and it's reasonable to assume that if you were on some kind of powerful mind-altering drug then you wouldn't be reading this.
The point is, you're here, and you're alive, and therefore you're conscious. You know you are.
OK then, since you're conscious and I'm conscious and everyone else is conscious, go ahead. Define it. What is consciousness? Where does it reside? Does it belong to the mind or the body, or does it exist outside both? Is consciousness part of our souls, or does it live in the things we create – our art, our music, our cities and wars?

Could it be mechanical or electronic, and, if so, what makes it operate? Most pressingly of all, is it possible we have now made for ourselves a new kind of consciousness, one which exists independently? If so, then what the hell have we got ourselves into?
The search for a definition of consciousness can lay claim to being the world's longest-running detective story. We've had our best minds on it ever since we developed brains big enough to ask questions and, still, we seem to be stumped.
Plato and Aristotle couldn't fix it; Kant, Hume and Locke tried different angles; Schrödinger, Heisenberg and Einstein remained in awe before it. None of them came up with the final formula, the definitive, nailed-it-forever, silences-all-critics answer.
Lately though, the hunt seems to have changed gear. Despite big differences about how best to conduct the search and where to look, several of the most persistent sleuths have found themselves disconcertingly close to agreement. No-one is yet at the stage when they are ready to call a press conference and announce to the world they have finally apprehended the suspect, but they have at least begun to converge on these two leads: the Omega Point and the Singularity.
Pierre Teilhard de Chardin is an improbable prophet, partly because he's dead, and partly because he's still associated with a famous palaeontological fraud. Born the fourth of 11 children near Clermont-Ferrand in France in 1881, de Chardin developed two interests when young: God and fossils. Aged 18, he entered the Jesuit order as a novice before completing his studies in philosophy and maths.
In 1912, he became part of the team working on Piltdown Man, the "discovery" of bones in East Sussex which were claimed to belong to an early hominid and thus to provide the missing evolutionary link between apes and humans.
Nearly 40 years later, the find was exposed as a hoax. Team leader Charles Dawson had combined the skull of a modern human with the jaw of an orang-utan. Whether or not de Chardin had actually participated in the fraud – the canine tooth he unearthed at the site was a major supporting piece of evidence – his archaeological work was interrupted by the outbreak of war.
When he resumed in 1918, he moved the focus of his studies sideways into geology and began teaching in China.
For the rest of his life, he combined writing, spiritual practice, teaching and adventure. By the time of his death in 1955 he'd driven a car across the whole of Eurasia and had a long but supposedly unconsummated relationship with an American sculptor called Lucile Swan.
But it was neither his science nor his love life that brought him into conflict with the church. It was his attempt to synthesise evolution and Christianity, and his views on original sin. The sin bit is still clouded (no-one knows whether he was in favour of more or less) but de Chardin's basic theory was that as science, humanity and civilisation develop, there will ultimately come a point when the noosphere – the sphere of sentient thought – evolves until it joins with itself, human consciousness unifies, and ... and something wonderful happens.

"At that moment of ultimate synthesis, the internal spark of consciousness that evolution has slowly banked into a roaring fire will finally consume the universe itself," he wrote in Let Me Explain, a collection of his thoughts published in 1970.
"Our ancient itch to flee this woeful orb will finally be satisfied as the immense expanse of cosmic matter collapses like some mathematician's hypercube into absolute spirit."
If the noosphere is to reach this exciting finale, then all the fractured layers of human thought must first be conjoined by a single disembodied intelligence. De Chardin envisaged that disembodied intelligence as something directed by us, but separate – an intelligence which now just happens to look a lot like the internet. The upside to noosphere theory is not only that it appears to unify science and theology, but that it also takes account of artificial intelligence. The downside is that, even allowing for mistranslation, de Chardin's writings are a stiff uphill climb through thickets of abstraction. Despite this handicap, it seems he's finally found his moment.
As a formally trained scientist in the 1940s, de Chardin took evolutionary theory as a given. The Catholic Church did not. His masterwork, The Phenomenon of Man – in which he argued that the next stage of evolution would be the point at which everything in the cosmos, all science, all thought, all energy, all matter, began to spiral towards an Omega Point of divine unification – did not please the Vatican, which banned him from publishing during his lifetime and exiled him from France.
At the time de Chardin was writing in the 1940s, a single global intelligence seemed both far-fetched and far-distant. But his theories have since been hauled into the 21st century by other thinkers and other disciplines.
The physicists are all busy looking for a grand unified Theory of Everything, while within biology, de Chardin's ideas have found their most popular form in variants of the Gaia theory and the work of James Lovelock. After all, if the Earth functions healthily as a single organism, then surely the human consciousness within it must also function collectively.
And then there are the Singularitarians, who believe that there will come a point in the not-at-all distant future when artificial intelligence finally outstrips human intelligence and computers become independently capable of designing their own successors.
Different thinkers suggest different dates for this. Author and scientist Vernor Vinge suggested it's just around the corner in 2030, Singularity fans have come up with a median estimate around 2040, and the manufacturers of drone toothbrushes (which spy on your brushstrokes and sneak the data to your dentist) evidently think it arrived some time ago.
Could computers simply eliminate the need for humankind, and if they were super-intelligent, what form would that intelligence take? The assumption at present is that any alternative technology, whether originally designed by us or not, would automatically be in competition with humans. In other words it may exist through our design but it could soon design us out of existence.
So. If all of these things (quantum physics, philosophy, government, capitalism) are indeed beginning to converge, then are we reaching de Chardin's tipping-point, and if we are, then what's on the other side? Was he really on to something, or just another visionary millenarian charlatan? And – most tricky of all – how are we supposed to know if our consciousness is changing when we don't even know what it is?
Rendered down, theories on consciousness divide into three. There's the rational/scientific approach, there's the spiritual/mystical approach, and there's the point where the two views intersect.
The rational/scientific approach holds that consciousness is some kind of by-product of existence, and existence is part of the universe we belong to. Since consciousness is within this big but comprehensible universe, one day we'll be able to find and measure it.
There will come a time not so far away when we finally invent an instrument – a probe, a spectrometer, a set of scales – with which we can locate that consciousness, pin it down and describe it like we can describe longitude or internal combustion.
Maybe we'll find it in the brain, maybe we'll find it close to the heart, maybe – like the old theory – when we finally come across it we'll discover it weighs exactly 21 grams and leaves the body exactly at the point of death. Either way, we'll definitely find it.
The spiritual/mystical approach says that any attempt to find a physical version of consciousness is hilariously perverse, since it starts from completely the wrong end. Consciousness isn't a product of the universe; the universe is a product of consciousness. We are all within consciousness and we are all indivisible from each other.
Every single one of us living billions comes from consciousness and is capable of comprehending it, whether we be an adult, a child or a mouse. In fact, the child and the mouse probably have a better grasp of consciousness than adults do because adults have far too much rational stuff getting in the way. Consciousness is better understood in an instant than it is in a lifetime, and all a child spends its life learning is the art of forgetting.

This way round, you do not have to be clever to understand consciousness. In fact, cleverness can be an active disadvantage. Most of us can't get our heads around consciousness because our minds get in the way, and yet those who devote themselves to the search for it – the philosophers, the theologians, the astrophysicists, the neurochemists – tend to be clever. Very clever, and/or very wise.
Wise enough to be multi-disciplinarians, to synthesise both the scientific and the mystical, and to honour the place of both.
In which case, this article should be read with a proviso: writing about consciousness is pointless. Completely ridiculous. It's like trying to find an accountancy of love or a taxonomy of song; the more words you expend on it, the further you travel away. Consciousness is not a three-dimensional phenomenon but a multi-dimensional one, and since writing is a three-dimensional tool, it has to be the wrong one for the job. You've got a much better chance of comprehending consciousness by staring out of the window or listening to music than you have by reading about it.
Still, philosophically, the scientific and the mystical appear very far apart. From the scientific point of view, consciousness is just a problem waiting to be solved. The mystical way round, there are no problems, any more than there are space, time, solutions or galaxies; nothing differentiated, nothing corporeal. There's only a single atomless soul.
In all its versions, that soul – for want of a bigger word – is what most of us spend our lives searching for, whether that be through God or meditation or the search for someone to complete us. Most of us, whether we acknowledge it or not, are looking for our way back to that single self, and since most adults have long since lost the straight route we search instead down the side-roads: near-death experiences, sex, drink, drugs, early '90s German trance; anything that appears to shorten the gap between what we can see and what we sense is there.
Within the scientific group, it's barely worth saying there's a world of difference between Daniel C Dennett's hardcore ultra-Darwinist position and that of Albert Hofmann, who in the 1930s first synthesised LSD and found in its visions the key to the doors of perception. But over the past few decades, there's been a notable shrinkage in the distance between the mystical and scientific positions. It used to be that you were either/or. Either you were an ardent rationalist, or you were an old acid-casualty boring on about bad trips.
Now, if anything, it's the scientists who seem to be wandering round with their arms extended, murmuring, "wow, man, it's all so, like... quantum." The great thing is watching science admit it doesn't know things, and that what it is finding at the end of its vastly improved instruments and calculators is not more certainty, but more uncertainty. There are places where the conventional rules do not apply, and one set of rules appears to cancel another out.
During the past century, the various branches of quantum science have arrived at points which would impress even the trippiest hippy. Newtonian physics disintegrates once past the atomic level. Einstein's theory of spacetime within general relativity is partially opposed by quantum field theory.
For every law there is a contradiction; for every stone of solid rational ground there are as many quicksands of inconsistency. Particles don't always behave the same way, light can behave as both particle and wave, time-travel is just waiting to happen.
And further. All possible outcomes spawn parallel universes; we exist at all times in a multiverse, not a universe; within the subatomic world there can be effects without causes; the behaviour of a particle on one side of the world influences the behaviour of its entangled twin on the other; time bends; complexity is founded on simplicity and everything is actually an endless pattern-repeat, like cosmic wallpaper. Plus of course Schrödinger and his poor half-dead cat: the act of observation changes both the observer and the nature of the thing observed.
The convergence between what the scientists are now saying and what the mystics have been banging on about for four millennia or so is complemented by an increasing parity between human and artificial intelligence.
The concept of the Deus Ex Machina, the God from the Machine, has existed since the Greeks invented tragedy, but the notion of an ultimate technological Singularity was first given expression in 1872 by the writer Samuel Butler in his novel Erewhon, and then given both a name and scientific plausibility in the 1950s by the mathematician John von Neumann.
Even so, the issue with the Singularity is not so much the point at which it might or might not occur, but what its disciples think will happen on the other side.
As the theoretical physicist Stephen Hawking suggests, computing capacity is even now growing at such a rate that we can barely control it: "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."
The technology Hawking requires in order to communicate gives him unusual insight into the issue. His upgraded computerised voice system was developed with Intel technology and predictive software from the British company SwiftKey, and is designed to "read", or recognise the patterning in, his thoughts.
Hawking's first thumb-operated system allowed him to communicate at 15 words a minute (ordinary speech is about 150 words per minute), but the degeneration in his remaining active muscles meant that by 2011, he could only spell out around two words in that time.
The Intel team ended up designing something which comes close to reading Hawking's thoughts. It learns the way he likes to construct his thoughts and adapts itself to his habits of "speech", and now needs only one or two letters in a word or phrase before predicting the rest. It even factors in his grammatical fastidiousness and his resistance to new technologies.
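The mechanics of that kind of prediction are easier to see in miniature. Below is a minimal sketch in Python of the general idea rather than of Intel's or SwiftKey's actual system: a frequency table built from a user's own past text suggests completions after only a letter or two. The class name and toy corpus are invented for illustration.

```python
from collections import Counter

class PrefixPredictor:
    """Toy word predictor: suggests completions from past usage.
    A crude illustration of the idea, not Intel's or SwiftKey's system."""

    def __init__(self):
        self.counts = Counter()  # how often the user has typed each word

    def learn(self, text: str) -> None:
        # Adapt to the user's habits of "speech" by counting their words.
        self.counts.update(text.lower().split())

    def predict(self, prefix: str, n: int = 3) -> list[str]:
        # After one or two letters, rank matching words by past frequency.
        matches = [(w, c) for w, c in self.counts.items()
                   if w.startswith(prefix.lower())]
        matches.sort(key=lambda wc: -wc[1])
        return [w for w, _ in matches[:n]]

predictor = PrefixPredictor()
predictor.learn("the universe began in a singularity "
                "the universe is expanding the boundary of the universe")
print(predictor.predict("un"))  # ['universe'] - two letters are enough
```

A real system layers context, grammar and whole-phrase prediction on top of this, but the core trick is the same: the machine gets better the more of you it has read.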
As Hawking points out, all this mind-reading technology is still relatively primitive, "but I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."
The best test for the claim is Moore's Law. In 1965 Gordon E Moore, who went on to co-found Intel (the company which helped Hawking find his voice), suggested that computing power and complexity would double every two years. Within the industry, the law has succeeded partly as a self-fulfilling prophecy, a goal which technology companies deliberately strive for. If anything, the interval between doublings is now more like 18 months. The end-point (say, 2040) is supposed to be the point at which computers outstrip us and become entirely self-replicating.
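The arithmetic behind that end-point is easy to check. The sketch below is a back-of-envelope calculation, not a forecast: it simply compounds the doublings over the span from Moore's 1965 observation to the supposed 2040 end-point, using the two doubling periods quoted above.

```python
# Back-of-envelope Moore's Law: capability multiplier after compounded doublings.
def moore_multiplier(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

span = 2040 - 1965  # from Moore's observation to the supposed end-point
print(f"Doubling every 2 years:   x{moore_multiplier(span, 2.0):.3g}")
print(f"Doubling every 18 months: x{moore_multiplier(span, 1.5):.3g}")
# Every two years gives roughly 2^37.5 (about 2e11);
# every 18 months gives 2^50 (about 1e15).
```

Whether silicon can actually sustain that curve is another question; the point is only that exponential growth makes even modest-sounding doubling periods deliver astronomical numbers.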
Some proponents believe the post-Singularity future will be all the better for it. Admittedly they're a broad church, including everyone from fans of cryonics to transhumanists and mind-uploaders: those who believe that the entire content of a human brain (every dusting of brainfluff and binload of thought-spam) could some day be uploaded on to a separate hard drive, thus rendering a real mortal body obsolete.
The prophet and leader of the benign-singularity camp is Raymond Kurzweil, a director of engineering at Google and long-time advocate of digital immortality. He's been right about a lot of things before (scanners, text-to-speech software, a computer defeating a human at chess), but he's also been wrong about a lot too.
Bioengineering has not reduced mortality from cancer and heart disease, and humans turn out not to like buying things from a computer-generated "virtual personality". Kurzweil is a controversial character who has spent almost as much time trying to ensure his own physical immortality (one of his books on the subject is titled Fantastic Voyage: Live Long Enough to Live Forever) as he has in predicting digital nirvanas.
Curiously enough, it's those who are closest to the issue who are sounding the loudest alarms. The techies aren't convinced that handing over so much power to something without either a pulse or a conscience is such a great idea.
Bill Gates recently generated several gigabytes of geek controversy with his caution against placing too much faith in IT.
"I am in the camp that is concerned about super intelligence," he said in a recent online Q&A session. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
His fellow techie Musk, who has built a fortune and a reputation from taking bets on the future, has also spoken recently of his concerns over AI. Musk started out with PayPal, now runs electric-car manufacturer Tesla, and is sufficiently concerned about the various threats to life on Earth that he's busy developing a rocket programme too, though as one of his recent tweets put it: "The rumour that I'm building a spaceship to get back to my home planet Mars is totally untrue."
During a recent talk at MIT, he warned: "We should be very careful about AI. If I was to guess what our biggest existential threat is, it's probably that. The thing with AI is that we're summoning the demon. You know all those stories where there's the guy with the pentagram and the holy water and it's like, yeah, he's sure he can control the demon. Didn't work out."
Musk's fear – and the fear of many of his peers – is that we end up designing something which either goes rogue, or which imprisons us (through something like mass surveillance), or which renders us obsolete by taking over the tasks and processes which at present only humans can complete.
Last summer, all of those future possibilities got one symbolic step closer. The Enigma code-breaker Alan Turing's famous test – can a machine demonstrate intelligence indistinguishable from that of a human? – was declared, at an event held at the Royal Society in London, to have been passed. A computer had managed to fool a third of the judges into believing it was "Eugene", a 13-year-old Ukrainian boy.
Eugene's supposed age and nationality provided a cover story for the typos and the gaucheness in the conversation, and anything the programme couldn't understand, it countered with a question. It is easy enough to pick holes in Eugene's performance after the event (in an age of trolls and cyber-spooks, aren't we all used to the idea that print distorts identity? And five minutes of conversation is surely too short a test) – but still.
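That deflection tactic – answer what you recognise, counter everything else with a question – is simple enough to sketch. The fragment below is a toy illustration of the single trick described above; it assumes nothing about the real Eugene Goostman program's internals, and the canned lines are invented.

```python
import random

# Toy "Eugene"-style deflection: canned answers where possible,
# a counter-question wherever the input is not understood.
CANNED = {
    "how old are you": "I am 13. My grandfather taught me English.",
    "where are you from": "Odessa, in Ukraine. Have you been?",
}
DEFLECTIONS = [
    "Why do you ask such a thing?",
    "And what do you think about it yourself?",
    "My mother says I ask too many questions. Do you agree?",
]

def reply(message: str) -> str:
    key = message.lower().strip(" ?!.")
    # Anything the program can't understand, it counters with a question.
    return CANNED.get(key, random.choice(DEFLECTIONS))

print(reply("How old are you?"))
print(reply("Explain the second law of thermodynamics."))
```

Five minutes of that, filtered through the expectations set by a teenage persona, turns out to be surprisingly hard to see through.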
Those who work most closely with the brain do not lose their wonder at it, though neurosurgeons seem as divided as everyone else about exactly what, or who, keeps the show on the road. The neuroscientist Susan Greenfield describes looking down at a human brain for the first time as a student.
"Well, first of all they smell of formalin. It's a really horrible smell. It stinks, but it keeps the brain firm while you're dissecting it, so you have to keep a set of gloves in a tupperware box. I remember it vividly – I remember holding it and thinking, 'God, this was a person'. You can hold it in one hand and if it's ready for dissection, it's a kind of browny colour with dried blood vessels, and it looks like a walnut. Two hemispheres like two clenched fists."
She believes that consciousness is not "some disembodied property that floats free. I don't believe in the theory of panpsychism – that consciousness is an irreducible property of the universe and our brains are like satellite dishes picking it up. I can't disprove it, but assuming that consciousness is a product of the brain and the body, then it's inevitable that if the brain is changed then consciousness will change."
Similarly, in his 2014 memoir Do No Harm, her friend and erstwhile colleague the neurosurgeon Henry Marsh regards the muddle over what belongs to the mind and what to the brain as "confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells."
Some forms of neurosurgery are better done under local anaesthetic, which means the patient is awake and responding to questions throughout. How strange and how miraculous to spend your working life looking down at a brain within its bony casing whilst holding a conversation with its owner.
Teilhard de Chardin would probably have loved neurosurgery, with his palaeontologist's mindset and his sense of the span of things. But does taking apart the brain, that living piece of physical origami, really get anyone nearer to knowing what consciousness is? Is it where the self resides, and if so, is that why brain diseases like Alzheimer's gnaw away at the stuff of the self? If time or disease pulls away someone's personality, burglarising all the stories that made them them and leaving nothing but a physical body, then has that disease made off with their consciousness too?
Dr Duncan MacDougall believed not just that consciousness and the soul were interchangeable, but that they could both be weighed.
In 1901 MacDougall was treating terminally ill tubercular patients in Massachusetts. Since his patients' decline towards death followed a relatively predictable trajectory, he decided to test an idea he'd had by placing the beds of six of his sickest patients on scales. He then balanced those scales, sat back, and waited. At the moment of their death, he claimed, they got lighter. Or, as the New York Times put it: "The instant life ceased, the opposite scale pan fell with a suddenness that was astonishing – as if something had been suddenly lifted from the body. Immediately all the usual deductions were made for physical loss of weight, and it was discovered that there was still a full ounce of weight unaccounted for." This, said MacDougall, was proof that the soul had mass. "The essential thing is that there must be a substance as the basis of continuing personal identity and consciousness, for without space-occupying substance, personality or a continuing conscious ego after bodily death is unthinkable."
MacDougall tried the same hypothesis on 15 dogs and on several mice. None showed any change in weight, which he claimed was proof that only humans had souls. Since MacDougall's original sample was small (of the original six patients, two were excluded, two lost even more weight after death and one put it back on, which left only one to uphold his theory) it did not take long for the experiment to be discredited. The 15 unfortunate dogs died under protest, and had been drugged.
Most of MacDougall's experiments were either daft or cruel. Like thousands before him and thousands afterwards, he snagged himself on two assumptions: one, that what has no mass cannot exist; and two, that the soul must be the same thing as consciousness. Which is the point at which things start to disintegrate. Faustian stories of soul-selling and soul-searching are compelling because they suggest that something unquantifiable can be apparated into something real. But there's a point beyond which even stories can't reach.
So maybe de Chardin was right about the Omega Point, and maybe he wasn't. His ideas are gaining traction not so much because of their content but because, starting from a place of faith, he synthesised science, artificial intelligence and divinity.
His advantage was that he was a multidisciplinarian and that he gave the old hope for a better Heaven a catchphrase. But his noosphere can only really work as a point of departure for more questions. He envisaged his point of complexity and convergence as a moment of revelation, a final unified rising towards God. But even if he's right, we all still have free will. And if there's going to be a tipping-point towards a new universe, then we should make sure it tips the right way.