Grok's explicit image scandal highlights international legal flaws

Regulatory frameworks on anonymity, editorial responsibility, and explicit images must be reviewed 

'An especially concerning aspect of the Grok story is that the AI system has been producing images in response to public messages on X, posting those images publicly on the platform.' File picture: Yui Mok/PA

In recent weeks there has been intense debate about how Grok, Elon Musk’s generative AI system, has been used to produce sexualised images of women and children. This has been a truly shocking development. 

An especially concerning aspect of the Grok story is that the AI system has been producing images in response to public messages on X, and posting those images publicly on the platform, without the consent of the person being depicted. Anyone can read the interaction with Grok, which is, in itself, shocking.

In Ireland, it is a crime to share intimate images of a person without their consent. Indeed, even threatening to do so is a criminal offence. It is also criminal to produce, share, or disseminate images that fall into the category of “child sex-abuse material”. 

The potential harm caused to those depicted in these images is deep and profound.

In addition to the criminal dimension, there are also regulations governing the technologies behind this scandal. In the UK, Ofcom has launched an investigation into X this week and can levy fines of £18m or 10% of worldwide revenue, whichever is the greater. In Ireland we have multiple pieces of relevant legislation, such as the Harassment, Harmful Communications and Related Offences Act 2020 (Coco’s Law) and the Online Safety and Media Regulation Act 2022 with its associated Online Safety Code. 

Online safety is regulated by Coimisiún na Meán, which is already engaging with X on this matter. At a European level, the AI Act regulates AI systems such as Grok, while the Digital Services Act regulates platforms such as X. The potential fines are significant in each case. The Irish Council for Civil Liberties and Digital Rights Ireland are calling on An Garda Síochána to investigate X. All the regulatory and legal mechanisms available are in play.

In the UK, Ofcom has launched an investigation into X this week and can levy fines of £18m or 10% of worldwide revenue, whichever is the greater. File picture: Noah Berger/AP

So is the problem solved?

Internet-scale platforms such as X operate on a global scale. The number of monthly active users on X is between 500m and 600m, almost half of whom follow Elon Musk. More than 130m people use X daily, and over 56m of its monthly users are based in the US. The largest user populations are in the US, Japan, India, Indonesia, and the UK. Within the EU, the largest user populations are in Germany, France, and Spain.

A feature of X is that it doesn’t have a real-name policy. Accounts can be effectively anonymous. It is estimated that around 25% of accounts are either partially or fully anonymous. Anonymous users participate more actively in sensitive discussions on topics such as politics, sexuality, and religion. 

It seems that the more sensitive the topic, the more actively anonymous accounts participate. It is estimated that almost 40% of accounts that engage with pornographic or sexually explicit content are anonymous.

So while we have very strong legislation that criminalises the production of sexualised images of women and children, the perpetrators of these crimes typically hide behind a cloak of anonymity, making it impossible for law enforcement to pursue the individuals involved.

In the last few days, Elon Musk has announced that access to Grok has been restricted to paid subscribers. This somewhat weakens anonymity on the platform, since credit card details would be associated with each paid user. If these users break the law, they could be identified with the cooperation of X. Of course, this does not address the entirety of the issue, nor does it mitigate the harms that have already been perpetrated and that might continue by other means on platforms such as X.

In the UK, Prime Minister Keir Starmer has raised the possibility of banning access to X. Unfortunately, banning access to websites and platforms isn’t effective. All one needs is access to a VPN to circumvent such restrictions. And, again, the harms are not addressed.

The challenge with online harmful content is that it appears everywhere at once. Traditional approaches to regulating media, or investigating crime, rely on a complaint being made by someone who has been impacted by a specific incident. On the internet, tens or hundreds of thousands of people can be harmed by the creation of, or exposure to, harmful content in an instant. The production of harmful content can be automated to give it scale as well as impact.

Ireland and the world need to develop a comprehensive taxonomy of online harm that sets out clearly what it is, its impacts, the nature of the harm it causes, and how it might be remedied. Such a taxonomy would help ensure we have the necessary legal responses, counselling supports, and other mechanisms for dealing with it. It might surprise many that, among the uses of AI banned by the EU AI Act, there is no mention of the production of sexualised content of women and children.

Sites and responsibility 

Online platforms have typically operated without responsibility for the content that they carry. They are providers of the digital vehicle for supporting communication, but they have no editorial responsibility. 

This is an operating principle that needs to be reconsidered. In a context in which vast sums of money are made from social media platforms and there is potential for enormous harm, something has to give. We have seen for weeks now how Elon Musk has not taken the problems with Grok seriously enough. When social media platforms are so powerful, there must be a commitment to corporate responsibility and accountability of the highest standards.

To properly regulate the online world, we must look at the role of real-name policies online. There are legitimate reasons why anonymity might be important in some situations, but not as a default option. The responsibility and accountability of platforms for the content that they disseminate and host must be considered carefully. 

Corporate accountability, and that of individuals with authority in these organisations, must be clearly set out. Appropriate guardrails preventing AI systems from generating responses to inappropriate prompts must be designed and tested before these systems are deployed publicly. If an AI system is unsafe, access to it should be removed until it is made safe. The regulatory frameworks already in place nationally and internationally need to be reviewed and updated in the context of a fast-moving technological landscape. None of this is impossible. It is necessary.

  • Barry O’Sullivan is a professor at the School of Computer Science & IT at University College Cork, founding director of the Insight Research Ireland Centre for Data Analytics at UCC and the Research Ireland Centre for Research Training on Artificial Intelligence, a member of the Irish Government’s AI Advisory Council, and former vice chair of the European High-Level Expert Group on AI.


© Examiner Echo Group Limited