While tech giants are spearheading the latest innovations in Artificial Intelligence, some experts are warning that while we don’t need to fear robots, we should fear entities capable of beating us at our own game, writes Rita de Brún.
Artificial intelligence (AI) could potentially be more dangerous than nukes. So tweeted Elon Musk, the serial entrepreneur and founder of PayPal, Tesla Motors and the spaceflight company SpaceX.
“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” — Elon Musk (@elonmusk) August 3, 2014
He was, of course, referring to the technology that’s defined as being capable of matching or exceeding human performance in most areas.
Another who fundamentally believes that AI is humanity’s biggest existential threat is Dr Stuart Armstrong, James Martin Research Fellow at the Future of Humanity Institute at Oxford University.
“We should not fear robots; we should fear entities that are capable of beating us at our own game,” he told the Irish Examiner. For Armstrong, it’s the ‘intelligence’ part of ‘artificial intelligence’ that we have to fear: “If machines can outthink us and out-compete us in fields of human domination, such as economics, politics, science and propaganda, we have a serious problem.”
In his book, Smarter Than Us: The Rise of Machine Intelligence, Armstrong uses an imaginary example of a machine that interprets an order to ‘cure cancer’ by destroying all human life, to make the point that there may be a gap between what an AI understands and what it is motivated to do.
‘No matter how poorly phrased the programmed initial goals, the AI is motivated to obey them, as its current requirements are its motivations and even if they are ‘wrong’ motivations from our perspective, the AI will only be motivated to change its motivations if its motivations demand it,’ he writes.
Armstrong drops the bombshell that if programmed poorly, AIs may see their controllers as just another obstacle to manipulate in order to achieve their goals. And there’s worse: AIs may not always answer humans truthfully.
AIs with wrong goals will lie because they know that controllers will try to stop them from achieving those goals if they reveal them.
It’s because the pressure is building to create safe, intelligent machines that Stephen Hawking and Elon Musk joined eminent scientists, high-profile tech investors, and AI researchers in signing an open letter urging that ‘the potential pitfalls’ of AI be avoided.
Search and translation aside, Google is also exceedingly interested in robotics; an interest that over the last couple of years saw the tech giant snapping up numerous companies including Boston Dynamics, Nest Labs, Bot & Dolly, Holomni, Meka Robotics, Schaft, Redwood Robotics, and DeepMind Technologies.
When DeepMind — a start-up that is working to make technology think like humans — was in the process of being bought by Google last year, it was widely reported that the former’s founders insisted during negotiations that an ethics board be created to ensure the artificial intelligence technology would not be abused.
The use of the word ‘insisted’ suggested that Google had to be convinced that this was the right course of action.
While Google is doing no evil, IBM is doing no harm. Marie Wallace, IBM’s analytics strategist, told the Irish Examiner that this is a focus of the analytics she is involved with at IBM. The TED-talker is a positive force in the face of growing calls for caution as to the potential risks associated with the future evolution of artificial intelligence.
“While I believe we should be cautious and thoughtful about every new piece of innovation, I don’t have the same negative attitude about artificial intelligence as those expressed by some pundits,” she says.
Adamant that her reservations relate to the evolution of technology in general rather than to artificial intelligence specifically, she concedes: “I appreciate the concerns around AI that have been expressed by some and agree that the necessary structures to keep technology safe need to be put in place.”
Pressed as to what some of those structures might be, she replies: “I don’t think there are enough humanists, sociologists and anthropologists working in the tech space. There should be many more, to bring an increased human dimension to technology.”
Wallace believes that many oversell where AI will be in the next 20 or 30 years. “Analytics systems can be very smart but only with the input of human beings,” she says.
“We are a long way from a time when human involvement will not be needed. I don’t see humans being out of the equation anytime soon, and more specifically, I don’t see clinicians being excluded from the equation anytime over the next five to 10 years. As for what the technology will be capable of 50 years from now, I can’t say.”
Elon Musk isn’t sure either, but he says that in developing artificial intelligence we might be summoning up the demon. “We can’t quite know what will happen if a machine exceeds our own intelligence. We can’t know if we will be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.”
Nobody can, of course. But we all have opinions. Stephen Hawking says the technology could “end the human race.” Bill Gates admits that he’s concerned about super intelligence and that he doesn’t understand why some people are not.
He also describes Ray Kurzweil as “the best person I know at predicting the future of AI.” Kurzweil, meanwhile, predicts that by 2045 computers will be more intelligent than human beings.
Stuart Armstrong says AI is the single greatest existential threat there is and that heavy investment in research is urgently required, so we can solve the ethics and mathematics problems required to safely programme these machines. He also says that current expertise is far from adequate for that task.
Elon Musk recently put his money where his mouth is by donating $10m to the Future of Life Institute “to help keep AI beneficial to society.” More research is needed to ensure that goal is reached. Let the race begin.
HISTORY OF AI
Given all the shouting that’s going on about the potential dangers posed by artificial intelligence, it’s easy to imagine AIs are figments of some futurist’s imagination, when the truth is these machines have been making waves for decades.
As far back as 1997, AI was at the core of IBM’s Deep Blue computer; a machine best known for beating world chess champion Garry Kasparov at his own game.
In doing so, it made reality of a prediction made seven years earlier by the futurist and inventor, turned Google director of engineering, Ray Kurzweil: that a computer would defeat a world chess champion by 1998.
Today, artificial intelligence is all around us. It powers Siri, the iPhone’s voice-recognition technology, and Google’s driverless cars. It’s the modus operandi behind legged locomotion, question-answering systems, mapping, translation and speech-recognition devices.
It’s at the heart of IBM’s language-fluent Watson; the machine that beat human competitors on the US TV game show Jeopardy, then scooped the $1m prize.
Perhaps it’s because Watson demonstrated a superb command of language subtleties on that show (which culminated in it describing a long, tiresome speech delivered by a frothy pie topping as a meringue harangue), that the cognitive technology is better known for that victory than it is for its current contribution to society: the revolutionising of genomics and the field of personalised medicine.
Artificial intelligence is ensconced in the weapon technology that is the grenade and machine-gun wielding sentry, the Samsung SGR-1 robot; the machine that South Korea deployed in 2010 in the Demilitarised Zone (DMZ) which separates it from North Korea.
It’s at the essence of the Campaign to Stop Killer Robots. Mary Wareham, advocacy director at Human Rights Watch and spokeswoman for the Campaign, told the Irish Examiner that weapons systems that would select targets and use force without further human intervention, are more likely to be created in the immediate future than autonomous weapons systems with sophisticated artificial intelligence.
“Our principal concern is with the trend towards greater autonomy in weapons systems and the possibility that human control will be removed altogether,” she says.
Artificial intelligence was also at the heart of UN representative Professor Christof Heyns’ report to the Human Rights Council, in which he called for a ban on ‘killer robots,’ with the comment: “Machines lack morality and mortality and as a result should not have life and death powers over humans.”
It was the method by which, in June of last year, a computer programme called Eugene Goostman passed the Turing test, by tricking 33pc of a panel of judges into believing, during the course of a five-minute typed message-style conversation, that they were communicating with a 13-year-old Ukrainian boy rather than a machine.
It’s at the very core of the technology behind the recent upgrading of astrophysicist Stephen Hawking’s voice computer and the method by which he was able to tell the BBC: “The development of full AI could spell the end of the human race…Artificial intelligence would take off on its own, and redesign itself at an ever-increasing rate…Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
So central to our very existence is artificial intelligence that billions of dollars are being invested in robotics by Google, along with other tech giants such as IBM, Facebook, Microsoft and Baidu.
So great is the growth in this sector that the Boston Consulting Group predicts that worldwide spending is expected to jump from just over $15 billion in 2010 to about $67 billion by 2025.
Meanwhile, McKinsey estimates that by 2025, intelligent technologies will create between $50 trillion and $100 trillion of value.
But from the evidence of the films below, maybe we needn’t worry just yet about robots taking over the world.
* 2001: A Space Odyssey
* Ex Machina
© Irish Examiner Ltd. All rights reserved