Digital ‘brain’ learned to spot cats by watching YouTube stills
Computers running algorithms designed to mimic the connections between neurons “learned” to recognise cats after being shown still frames sampled from YouTube videos, Google Fellow Jeff Dean and visiting faculty member Andrew Ng said in a blog post.
“Our hypothesis was that it would learn to recognise common objects in those videos,” the researchers said.
“Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats. Remember that this network had never been told what a cat was, nor was it given even a single image labelled as a cat.”
In essence, the computer discovered for itself what a cat looked like, according to Mr Dean and Mr Ng.
The computations were spread across an “artificial neural network” of 16,000 processors and a billion connections in Google data centres.
The small-scale “newborn brain” was shown YouTube images for a week to see what it would learn.
“It ‘discovered’ what a cat looked like by itself from only unlabeled YouTube stills,” the researchers said.
“That’s what we mean by self-taught learning.”
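The idea behind self-taught learning can be illustrated with a toy sketch. The example below is emphatically not Google's system, whose details the researchers did not spell out here; it trains a single artificial neuron on unlabelled synthetic “frames” using Oja's rule, a classic Hebbian learning rule, so that the neuron comes to respond strongly to a recurring pattern it was never told about. The data, sizes, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabelled toy "frames": 16-pixel images. Half contain a fixed bright
# bar (a stand-in for the recurring cat pattern); half are faint noise.
pattern = np.zeros(16)
pattern[4:8] = 1.0
frames = np.vstack([
    pattern + 0.05 * rng.standard_normal((200, 16)),
    0.1 * rng.standard_normal((200, 16)),
])
rng.shuffle(frames)
frames -= frames.mean(axis=0)        # centre the data

# One artificial neuron trained with Oja's Hebbian rule: its weights
# drift toward the direction of greatest variance in the input --
# here, the bar pattern -- with no labels ever supplied.
w = 0.01 * rng.standard_normal(16)
lr = 0.01
for _ in range(20):
    for x in frames:
        y = w @ x                    # neuron output
        w += lr * y * (x - y * w)    # Oja's update (self-normalising)

# The trained neuron responds far more strongly to the pattern than
# to a noise frame it has never seen.
response_to_pattern = abs(w @ pattern)
response_to_noise = abs(w @ (0.1 * rng.standard_normal(16)))
print(response_to_pattern, response_to_noise)
```

Oja's rule keeps the weight vector bounded while pulling it toward the data's dominant direction of variation, which is why the neuron latches onto the repeated pattern rather than the noise, in the same spirit as the “cat neuron” emerging from unlabelled stills.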
Google researchers are building a larger model and are working to apply the artificial neural network approach to improve technology for speech recognition and natural language modelling, according to Mr Dean and Mr Ng.