Google disrupts AI hackers exploiting unknown weakness in firm’s digital defence


Google said it had disrupted a criminal group’s attempt to use artificial intelligence to exploit another company’s previously unknown digital vulnerability, adding to heightened worries across government and private industry about AI’s risks for cybersecurity.

Google shared limited information about the attackers and the target, but John Hultquist, chief analyst at the tech giant’s threat intelligence arm, said it represents a moment cybersecurity experts have warned about for years: malicious hackers arming themselves with AI to supercharge their ability to break into the world’s computers.

“It’s here,” Mr Hultquist said. “The era of AI-driven vulnerability and exploitation is already here.”


The disruption comes amid rapid leaps in AI’s ability to find vulnerabilities, including the Mythos model announced a month ago by Anthropic.

Among those trying to bolster their defences is US President Donald Trump’s White House, which has shifted its approach to vetting the most powerful AI models before their public release.

After following through with a campaign promise to repeal Democratic president Joe Biden’s guardrails around the fast-developing technology, the Republican administration and its allies are now sending mixed signals about the government playing a larger role in AI oversight.

“Some people don’t want there to be a regulatory response to this and others do,” said Dean Ball, a senior fellow at the Foundation for American Innovation who was previously a White House tech policy adviser and a lead author of Mr Trump’s AI policy roadmap last year.

“I don’t like regulation,” Mr Ball said.

“I would prefer for things not to be regulated. But I think we need to in this case.”


Google said it observed a group of prominent “threat actors” planning a big operation relying on a bug they had found.

The vulnerability allowed them to bypass two-factor authentication to access a popular online system administration tool, which Google declined to name.

The company called it a zero-day exploit, a cyberattack that takes advantage of a previously unknown security vulnerability.

“Zero-day” refers to the fact that security engineers have had zero days to develop a fix for the vulnerability.

Google said it notified the affected company and law enforcement and was able to disrupt the operation before it caused any damage.

But as it traced the hackers’ footprints, it found evidence they had used an AI large language model — the same technology that powers popular chatbots — to discover the vulnerability.


Google did not reveal which AI model was used in the cyberattack, only that it was most likely not Google’s own Gemini or Anthropic’s Claude Mythos.

Google also did not reveal which group it suspected in the attack, but said there was no evidence it was tied to an adversarial government. The company noted, however, that groups tied to China and North Korea have been exploring similar techniques.

Mr Hultquist said that compared with government spies who typically work slowly and quietly, criminal hackers have some of the most to gain from AI’s “tremendous capability for speed” in finding and weaponising security bugs.

“There’s a race between you and them to stop them before they can essentially get whatever data they need to extort you with, or launch ransomware,” he said in an interview.

“AI is going to be a huge advantage because they can move a lot faster.”

Mr Trump’s Commerce Department announced last week that it signed new agreements with Google, Microsoft and Elon Musk’s xAI to evaluate their most powerful AI models before their public release, building on previous agreements the Biden administration made with Anthropic and ChatGPT maker OpenAI.


But the announcement later disappeared from the Commerce Department website.

It was the latest example of jumbled signals from the Trump administration in the month since Anthropic announced a new model it called Mythos that it said was so “strikingly capable” at hacking and cybersecurity work that it could only release it to a small group of trusted organisations.

Anthropic created an initiative called Project Glasswing bringing together tech giants including Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world’s critical software from “severe” fallout that the new model could pose to public safety, national security and the economy.

But its relationship with the US government was complicated by a public and legal fight with the Pentagon and Mr Trump himself over military use of its AI technology.

Its top rival, OpenAI, has since introduced a similar model.

The company said on Friday it was releasing a specialised cybersecurity version of ChatGPT that would only be available to “defenders responsible for securing critical infrastructure” to help them find and patch vulnerabilities in their code.


© Examiner Echo Group Limited