Google Ireland’s chief on deepfakes, ethics, and EU regulation

An experiment shows Google's Gemini AI can create sensitive, misleading images of real individuals from public photos despite its built-in safeguards.

Head of Google Ireland, Vanessa Hartley, says the company wants its AI models to comply with the EU’s code of practice.

The technology behind large language models (LLMs) remains “extremely nascent” and will need to be improved further over time, the head of Google Ireland has said.

In an interview with the Irish Examiner, vice president of large customer sales across Google’s EMEA region and head of Google Ireland, Vanessa Hartley, said that some of the capabilities possessed by the tech giant’s generative AI platform Gemini are “not ideal”, with ongoing discussions taking place about the LLM’s code of practice.

Released at the end of 2023 in response to the AI boom and launch of OpenAI’s GPT-4, Gemini offers a chatbot function, deep reasoning, and advanced coding capabilities, with 75% of Google’s code now being written by AI.

Significantly, Gemini can also generate audio and images, with its latest Nano Banana 2 model offering some of the most advanced, hyper-realistic image generation at the touch of a button.

Recent months have shown what can happen when AI image-generation capabilities are used maliciously: earlier this year, Elon Musk’s Grok AI assistant created non-consensual nude images of real people, most of them depicting women or children.

The incident, which sparked regulatory and ethical concerns worldwide, saw around 3m sexualised images created by Grok in just 11 days.

Google has implemented several layers of safeguards to prevent the creation of such images on Gemini, with Ms Hartley explaining: “You cannot reproduce pictures of famous people or copyrighted pictures. You can only do it with your own photographs.

“We are very keen to make sure we stay within the parameters of what’s right.”

Gemini itself says that protecting users from deepfakes and non-consensual imagery is a “core part” of its safety design, with its underlying models trained to block sexually explicit content, likenesses of real people, and non-consensual edits. The chatbot also says it cannot depict images of any public figures.

But do these current guardrails protect people enough in the era of AI-made content and image misinformation?

An experiment by the Irish Examiner found that non-copyrighted images of individuals, which are widely available online and across personal social media profiles, could still be used by Gemini to depict people in sensitive images and to spread misinformation.

An image created by Gemini using a publicly available reference photograph depicted this journalist as a delegate at the ard fheis of a major Irish party, despite having no affiliation with any Irish political organisation. It took just one prompt for Gemini to create this image.

Asked if she foresaw issues with Gemini being used in this way, Ms Hartley said: “All these different topics are being dealt with in the code of practice. We’re working really hard with the commission to make sure we have real clarity on what can, will, and should happen with AI.

“We talk a lot about it, but this is an extremely nascent technology. In reality, it’s only had commercial models for two or three years.

“I know that over time, we will be able to work through all those different topics.”

Many LLMs have been released by their respective owners while still in their infancy, leading to issues with societal biases, a lack of appropriate safeguards, and hallucinations — AI-generated responses that are nonsensical, fabricated, or not grounded in any factual data.

As Ms Hartley explains: “We’re building through more use cases now, including ones like this image, where our teams would work to make sure that that does or does not happen again.

“We want our models to be compliant with the EU’s code of practice,” Ms Hartley said, adding that AI is “too important not to regulate”, and needs to be built responsibly.

“Different legislations will allow for different things, but I imagine that over time, these things will be clearer.”

Asked if this allows for issues to arise in the short term as the final regulations are ironed out, the head of Google Ireland added: “I mean, the example you shared is not ideal.

“I think, in reality, we’re going to have to work through that, to be honest with you.”
