More than four million photographs shot for Life magazine are now available for anyone to look through – tagged by Google’s computer vision algorithm to make searching through the vast archive easier.
Life Tags is one of three experiments – all of which use AI algorithms in the background – shared by Google as examples of what can be achieved using artificial intelligence and machine learning.
It combines the company’s AI know-how with the desire to use technology for the benefit of culture, something explored at the Arts & Culture Lab in Paris.
Astronauts, laughing babies, dogs and more are all tagged.
Life magazine, which was published between 1936 and 1972, only published about 5% of the millions of pictures which were shot for its pages. Now four million of those images are searchable.
“Our experiment brings the iconic magazine to life. Browse, search and rate images corresponding to thousands of labels as funny, right or wrong,” Google explains on its blog.
Pictures have been classified using Google’s Image Content-based Annotation (ICA) algorithm to generate labels based on image pixels – the same tech used in Google photo search, meaning the database is searchable and cross-referenced.
A picture of the Queen on her wedding day by William J Sumits crops up automatically in “England”, “Royal Family”, “Great Britain”, “Marriages” and “1940s”.
But it goes deeper than that – the AI can identify “babies making funny faces” just as easily as a “shoe”, and can even pick out pictures which are “calm”.
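The searchable, cross-referenced archive the article describes can be pictured as an inverted index from labels to photos, with a search returning the photos that carry every requested label. The sketch below is purely illustrative – the data, function names and structure are assumptions, not Google's actual system.

```python
# Toy sketch of label-based cross-referencing: each photo carries
# machine-generated labels, and a search intersects the label index.
# All names and data here are illustrative, not Google's implementation.

from collections import defaultdict

def build_label_index(photos):
    """Map each label to the set of photo ids that carry it."""
    index = defaultdict(set)
    for photo_id, labels in photos.items():
        for label in labels:
            index[label].add(photo_id)
    return index

def search(index, *labels):
    """Return photo ids tagged with every requested label."""
    sets = [index.get(label, set()) for label in labels]
    return set.intersection(*sets) if sets else set()

# Hypothetical mini-archive: photo ids mapped to generated labels.
archive = {
    "img_001": {"England", "Royal Family", "Marriages", "1940s"},
    "img_002": {"Astronauts", "1960s"},
    "img_003": {"England", "1940s", "Dogs"},
}

index = build_label_index(archive)
print(sorted(search(index, "England", "1940s")))  # → ['img_001', 'img_003']
```

Because every photo appears under each of its labels, the same image surfaces in many searches at once – the cross-referencing behaviour described above.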
For the second AI-driven experiment from the Google Lab, colour is key.
Art Palette allows users to select colours and then “using a combination of computer vision algorithms, it matches artworks from cultural institutions from around the world with your selected hues”, explains Damien Henry, experiments team lead based at the Google Arts & Culture Lab, in a blog post.
“You can also snap a photo of your outfit today or your home decor and can click through to learn about the history behind the artworks that match your colours.”
For anyone more keen on practical applications, this tech could be used to find a piece of art which matches a colour scheme.
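The colour-matching idea can be sketched as a nearest-colour lookup, assuming each artwork is summarised by one dominant RGB value. Art Palette itself uses a combination of computer vision algorithms; this toy stand-in, with invented artwork names and colours, only illustrates the principle.

```python
# Minimal sketch of matching artworks to chosen hues via nearest
# dominant colour. Artwork titles and RGB values are hypothetical;
# this is not Art Palette's actual method.

def colour_distance(c1, c2):
    """Squared Euclidean distance between two RGB triples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def match_artworks(palette, artworks, top_n=2):
    """Rank artworks by how close their dominant colour is to the palette."""
    def best_distance(dominant):
        return min(colour_distance(dominant, hue) for hue in palette)
    ranked = sorted(artworks.items(), key=lambda kv: best_distance(kv[1]))
    return [title for title, _ in ranked[:top_n]]

# Illustrative dominant colours for a few made-up works.
artworks = {
    "Blue Interior": (40, 60, 180),
    "Sunset Study": (220, 120, 40),
    "Sea Sketch": (30, 90, 160),
}

print(match_artworks([(35, 80, 170)], artworks))  # → ['Sea Sketch', 'Blue Interior']
```

A real system would compare richer colour signatures (palettes or histograms, often in a perceptual colour space) rather than a single RGB value, but the ranking-by-distance shape is the same.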
The final experiment focuses on the photographic collection of the Museum Of Modern Art (MoMA) in New York.
The gallery has taken pictures of its exhibitions since its first show in 1929, but the archive lacked information about the works in them.
Mr Henry added: “To identify the art in the photos, one would have had to comb through 30,000 photos — a task that would take months even for the trained eye.
“The tool built in collaboration with MoMA did the work of automatically identifying artworks — 27,000 of them — and helped turn this repository of photos into an interactive archive of MoMA’s exhibitions.”
“In collaboration with @googlearts, we've made it easier than ever to connect the dots of #MoMAhistory with machine learning and computer vision technology. Learn more: https://t.co/lE556DAbB7 #GoogleArts” – MoMA, the Museum of Modern Art (@MuseumModernArt), March 7, 2018
Google hopes that by sharing details of its experiments it will “lead you to explore something new, but also shape our conversations about the future of technology, its potential as an aid for discovery and creativity”.
The experiments are designed to be explored either on the dedicated website, or via the Google Arts & Culture app, available for iOS and Android.