Google is on a spending spree, buying up firms that could give it the edge in areas such as robotics and AI, write Dan Buckley and Mark Blinch.
Google owns the present — now it wants the future. Backed by a $58bn cash war chest, Google has started to look far beyond traditional computing devices and is snapping up individual pieces to put together a “real life internet”, a behemoth that will dominate and dictate all aspects of our daily lives by controlling both the software and the hardware.
The company is betting its future on the fact that one day cars, fridges, mobile phones, computers and home devices will talk to each other, generating data that can be converted into insights — opening up advertising opportunities.
In recent months, it has spent billions on an acquisition spree — but not on just any old start-ups. No, the company has its eye on companies in particular tech sectors: robotics and artificial intelligence.
It’s a moonshot for the company, a term used to refer to the kind of long-term, crazy projects for which Google has become famous: driverless cars, the wearable Google Glass computer, weather balloons to bring wireless access to rural communities. In a show of commitment, Google placed one of its top executives — Andy Rubin, who built the successful Android operating system — at the helm.
Google recently disclosed it had acquired seven robotics companies in the preceding six months; it has since scooped up more. What, precisely, Google is doing with all these acquisitions remains a mystery. There are a handful of robotic arm companies, leading to early speculation that Google might get into manufacturing, in support of Chromecast, Google Glass and Google-owned Motorola.
Here’s a quick look at some of the companies it has snapped up:
Autofuss
This one comes first alphabetically, but in many ways it’s the odd company out. Autofuss is actually an advertising and design firm that has previously worked with a number of high-profile companies, including Google acquisition Bot & Dolly, as well as Google itself.
The company is a self-described “interdisciplinary design studio”, working in motion design, live action, animation and, yes, robotics.
Boston Dynamics
The proverbial big dog on campus, this was the purchase that let the tech world know just how serious Google was getting about robotics. The MIT spinoff has made a name for itself in the world of engineering and in the larger blogosphere for a series of impressive DARPA-funded military robots.
Chief among these robots is the nightmarish four-legged Big Dog, which is capable of traversing rough terrain, and which is generally impossible to kick over. It can run up to four miles an hour, carry up to 400 pounds and has a range of around 20 miles. And now it’s owned by Google.
Bot & Dolly
Autofuss’s ‘sister company’ is located in the same San Francisco warehouse. Bot & Dolly is in the business of making giant robot arms for commercials and movies. Perhaps you saw the film Gravity? Bot & Dolly made the camera-grasping arms that helped make the film’s zero gravity a convincing reality.
Holomni
Like a number of the other acquisitions, Holomni went radio silent after being bought by Google, noting on its website that it’s retiring from external product development. Before being welcomed into the Mountain View tent, the company was best known for creating wheels capable of omnidirectional motion. This is how Google’s robots will most likely roll.
Industrial Perception
Industrial Perception’s self-described mission is, “Providing robots with the skills they’ll need to succeed in the economy of tomorrow”. More simply put, the company makes robotic arms for loading cargo. It goes a bit deeper than that, too, as Industrial Perception also creates 3D vision systems that are capable of distinguishing shapes, helping the bots to avoid collisions.
Meka Robotics
Yet another MIT spinoff, this San Francisco company makes arguably the friendliest-looking robots of the bunch. Meka creates all sorts of humanoid robotics components, from torsos to hands to the adorably terrifying S2 head.
Redwood Robotics
A joint venture of Meka and Willow Garage, another Bay Area–based robotics company, Redwood Robotics is yet another entrant in the robotic arm game, though this one is more interested in making inexpensive, safe models designed for use in the home. A chicken in every pot and a robotic arm to help you scrub those hard-to-reach bits.
Schaft
A Japanese robotics company spun out of Tokyo University, Schaft is also into humanoid robotics, though the company’s ’bots are all about search and rescue. The company got high marks in the recent DARPA Robotics Challenge for its 5-foot tall, 209-pound robot, which looks a bit like a miniature version of the Imperial Walker from The Empire Strikes Back. It’s capable of some pretty impressive feats, including driving by itself, opening a door and climbing a ladder.
And all those acquisitions are just for starters. Last month, Google announced it is buying smart thermostat start-up Nest in a deal valued at $3.2bn.
The big-ticket buy continues the internet titan’s move into consumer electronics hardware, adding smartphone-synched thermostats to its Motorola Mobility smartphones, Nexus mobile devices and Chromecast.
As well as gaining access to millions of homes, the purchase of Nest also brings some of the smartest minds in the world back into the fold.
Nest co-founder Tony Fadell is a former senior vice president of the Apple division behind iPods and iPhones. Fellow co-founder Matt Rogers was a lead iPod software engineer working with Fadell at Apple. Nest’s vice president of technology Yoky Matsuoka was once head of innovations at Google.
Inspiration for Nest came when Fadell was building an environmentally friendly home in Northern California and discovered that thermostat technology was stuck in a bygone era. Fadell pulled together a team to bring the thermostat into the mobile internet age.
The sleek, disk-shaped thermostat is controlled by turning an outer ring. A black display screen showing the temperature turns blue to indicate cooling or red to show rooms are being heated.
Machine learning built into thermostats lets them adapt to patterns in homes within a week of regular use. The more users adjust their Nest thermostats, the more precisely the devices learn preferred comfort levels in homes.
Sensors in the thermostat assess whether lights are on or there is movement, determining when people are away and then shifting to energy-saving settings.
A green leaf appears on-screen to prompt users to save energy and money by altering their usual thermostat setting by a barely noticeable degree.
Learning thermostats also tell people how long it will take to get rooms to desired temperatures, letting them assess whether they will be home long enough to justify the process.
Nest thermostats have wi-fi connectivity to link up to the internet, and a free smartphone application lets people manage home climates from afar, or mine data about energy used for heating or cooling.
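The learning behaviour described above can be sketched in a few lines of code. This is purely an illustration of the idea, not Nest’s actual algorithm: every manual adjustment is logged against the hour of day, the learned setpoint for an hour is the average of the adjustments seen for it, and an occupancy flag (standing in for the device’s motion and light sensors) triggers the energy-saving setting. All class and temperature names here are invented for the example.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy schedule-learning thermostat (illustrative sketch only)."""

    def __init__(self, default_setpoint=20.0, away_setpoint=16.0):
        self.default = default_setpoint
        self.away = away_setpoint
        self.history = defaultdict(list)  # hour of day -> temps chosen

    def record_adjustment(self, hour, temperature):
        # The more the user adjusts, the more data the schedule has,
        # mirroring how the device "learns preferred comfort levels".
        self.history[hour].append(temperature)

    def setpoint(self, hour, occupied=True):
        # Sensors decide occupancy; when nobody is home,
        # drop to the energy-saving setting.
        if not occupied:
            return self.away
        temps = self.history.get(hour)
        if not temps:
            return self.default
        return sum(temps) / len(temps)

t = LearningThermostat()
t.record_adjustment(7, 21.0)
t.record_adjustment(7, 23.0)
print(t.setpoint(7))                   # learned morning preference: 22.0
print(t.setpoint(7, occupied=False))   # away mode: 16.0
print(t.setpoint(13))                  # no data yet, falls back to 20.0
```

A real device would weight recent behaviour more heavily and smooth transitions between hours, but the learn-from-adjustments loop is the core idea.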
Nest launched in late 2011 with its smart thermostat and later added a smoke and carbon monoxide detector to its line.
“This means firmly and clearly that Google is getting into connected homes,” Forrester analyst Frank Gillett said of the acquisition.
“It once again demonstrates that the industry is going in the direction of Apple; deciding it is darn important to control the hardware.”
Smart homes, featuring intelligent objects such as door locks, lamps, refrigerators and washing machines, were among the hot trends at an international Consumer Electronics Show extravaganza last month in Las Vegas.
Getting back to the future, Google also announced it has acquired DeepMind — a British artificial intelligence start-up that creates machine learning algorithms for a wide range of applications, from commerce to gaming.
DeepMind’s technology could enable robots to communicate with one another, or with humans. The acquisition reportedly set Google back up to $500m.
The company has become increasingly focused on artificial intelligence in recent years. In 2013 it announced a partnership with NASA and several universities to launch the Quantum Artificial Intelligence Lab.
In a recent earnings call, Facebook CEO Mark Zuckerberg said he is interested in artificial intelligence that will help Facebook better understand users. Such technology could help Facebook understand the objects inside users’ photographs, such as handbags or food, which could lead to more targeted advertising.
The idea of creating smarter computers based on the brain has been around for decades as scientists have debated the best path to artificial intelligence. The approach has seen a resurgence in recent years thanks to far superior computing processors and advances in computer-learning methodologies.
One of the most popular technologies in this area involves software that can train itself to classify objects as varied as animals, syllables and inanimate objects.
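The classify-by-example idea can be illustrated with something far simpler than a deep neural network. The sketch below is a nearest-centroid classifier, not deep learning: it averages the labelled examples it has seen into one prototype per class and assigns new inputs to the closest prototype. The feature vectors and labels are invented for the example; real systems learn such features from raw pixels or audio across many layers.

```python
import math

def train_centroids(samples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        for i, x in enumerate(features):
            sums[label][i] += x
        counts[label] += 1
    return {label: [s / counts[label] for s in vec]
            for label, vec in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], features))

# Toy feature vectors, e.g. (size, furriness), for two made-up classes.
training = [((0.9, 0.8), "cat"), ((1.0, 0.9), "cat"),
            ((0.2, 0.0), "fish"), ((0.3, 0.1), "fish")]
model = train_centroids(training)
print(classify(model, (0.85, 0.7)))   # "cat"
print(classify(model, (0.25, 0.05)))  # "fish"
```

The point is the workflow — learn from labelled examples, then generalise to unseen inputs — which deep networks scale up to millions of parameters and raw sensory data.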
However, many experts claim Google is still a long way from achieving the processing scale of a human brain, let alone understanding how it works.
“I’m glad to hear the news about Google’s acquisition of DeepMind, since it will attract more attention to this field,” noted Pei Wang, an artificial general intelligence researcher at Temple University. “However, in my opinion, reinforcement learning and deep learning are not enough to give us ‘thinking machines’.”
But while thinking machines may be beyond even Google’s deep pockets, the acquisition will benefit many of the company’s operations, including:
YouTube
Google users upload more than 100 hours of new video to YouTube every minute. The company already scans for copyright violations, but it now has the potential to recognise people, objects, brands, products, places, and events. The technology could also better curate the millions of videos on YouTube, making suggestions and related videos much smarter.
Speech recognition and translation
Google Translate is already well regarded, but the new technology could take it to another level — even allowing travellers to use their smartphones to help translate in real time when speaking to someone in another language.
Better search facilities
Google built its empire on search, but its results are often too general. Deep-learning technology can better understand what people are searching for, producing better results.
Security
Deep learning and neural networks excel at pattern recognition, whether that’s pixels in an image or behaviours exhibited by users’ accounts or devices. Google could use deep-learning technologies to protect accounts and improve users’ trust in the company.
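Behavioural pattern recognition for account protection can be sketched with a deliberately simple statistical check. The function below flags a login whose hour of day deviates strongly from an account’s history using a z-score; it stands in for the idea only — a real system would model many signals (location, device, typing cadence) with learned models rather than one hand-picked feature, and the function name and threshold are invented for this example.

```python
import statistics

def unusual_login(history_hours, new_hour, threshold=2.0):
    """Flag a login hour that deviates strongly from the account's pattern.

    A toy z-score check: how many standard deviations is the new
    login hour from the account's historical mean login hour?
    """
    mean = statistics.mean(history_hours)
    stdev = statistics.pstdev(history_hours)
    if stdev == 0:
        # No variation in history: anything different is unusual.
        return new_hour != mean
    return abs(new_hour - mean) / stdev > threshold

# An account that normally signs in around 9am:
usual = [8, 9, 9, 10, 8, 9]
print(unusual_login(usual, 9))   # False: fits the established pattern
print(unusual_login(usual, 3))   # True: a 3am login looks anomalous
```

An anomalous score would not block the login outright but could trigger a second factor — the same escalation pattern a learned model would feed.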
Image recognition
Greater use of face recognition in videos and photographs, as well as recognising places and events — again opening up different ad revenue streams.
Advertising
The bulk of Google’s revenue comes from online advertising, where deep-learning technologies could be applied to targeting users even more precisely with ads. But Google also wants to sell users movies, music, books, and apps via Google Play.
According to an article in the Financial Times, Shane Legg, one of the founders of DeepMind, has predicted that “human level” artificial general intelligence (AGI) will arrive by 2030. By 2020, he forecasts the creation of “a system with basic vision, basic sound processing, basic movement control, and basic language abilities, with all of these things being essentially learnt rather than preprogrammed”.
However, creating artificial intelligence that mimics the human brain may be even beyond Google — for now.
“If you want to develop systems that have the behavioural flexibility of a human being, there is no option other than to try to recreate the brain’s dynamics inside a computer,” Simon Stringer, director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, one of the leading laboratories in the field, told the Financial Times.
Within the next decade, he said, “I would be very pleased if we could develop a system that has the intelligence of a rat.”
© Irish Examiner Ltd. All rights reserved