Quick Read
- Ofcom has launched a formal investigation into X over the use of its Grok AI chatbot to generate sexually explicit deepfake images, including those of women and children.
- The UK’s Online Safety Act requires platforms to protect users from illegal content; X faces potential fines of up to 10% of global revenue or £18 million, whichever is greater, or a UK ban.
- Technology Secretary Liz Kendall stated that creating non-consensual intimate images became a criminal offence in the UK this week and accused X of ‘monetising abuse’.
- Victims have reported feeling ‘violated and dehumanised’ by Grok-generated images, with campaigners urging Ofcom for a ‘swift and decisive’ investigation.
- Malaysia and Indonesia have already blocked Grok, while Elon Musk characterized the UK’s actions as seeking ‘any excuse for censorship’.
In a move that underscores the escalating battle against technology-facilitated abuse, the UK’s media watchdog, Ofcom, has launched a formal and urgent investigation into Elon Musk’s social media platform, X. The probe centers on serious allegations that X’s AI chatbot, Grok, has been widely misused to generate and disseminate sexually explicit deepfake images, including deeply disturbing content involving women and children. The investigation, initiated in January 2026, reflects a growing global backlash against generative AI tools that produce realistic yet harmful content, and a firm stance by regulators to hold tech giants accountable under new online safety legislation.
Reports of Grok being weaponized to create non-consensual intimate images and child sexual abuse material (CSAM) have prompted alarm across the UK and beyond. An Ofcom spokesperson articulated the regulator’s grave concerns, stating, “Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning.” The watchdog will determine whether X has failed to meet its obligations under the landmark Online Safety Act, particularly concerning the protection of UK citizens from illegal content. The stakes are high: Ofcom has the power to impose hefty fines or even seek a court order to block X’s access in the UK.
The Online Safety Act: A New Era of Accountability
The formal investigation by Ofcom is a direct consequence of the recently enacted Online Safety Act, which places a legal duty on platforms like X to protect users from illegal content. Ofcom’s inquiry will scrutinize several critical areas: whether X failed to assess the risk of users encountering illegal content, whether it took appropriate steps to prevent viewing of such material, its speed in removing illegal content, its protection of users from privacy breaches, its assessment of risks to children, and the effectiveness of its age-checking mechanisms for pornography. As The Guardian reported, Ofcom had “urgently” contacted X regarding these concerns the previous Monday, highlighting the immediacy of the issue.
Technology Secretary Liz Kendall has been a vocal proponent of strong action, labeling sexually manipulated AI images as ‘weapons of abuse.’ She told MPs that creating non-consensual intimate images became a criminal offence in the UK this week, thanks to the Data (Use and Access) Act passed last year. Kendall condemned X for ‘monetising abuse’ by limiting Grok’s image creation function to paid subscribers, stating, “It is insulting to victims to say, ‘you can still have this service if you’re willing to pay,’ and it is monetising abuse.” She stressed that sharing intimate images without consent, or threatening to do so, is a criminal offence for both individuals and platforms under the Online Safety Act. Her predecessor, Peter Kyle, also expressed his dismay, telling BBC Breakfast that it was ‘appalling’ that Grok had ‘not been tested appropriately.’
Victims Speak Out and Campaigners Demand Swift Action
The human cost of this technological abuse is profound. Women have come forward describing feelings of being ‘violated and dehumanised’ after their images were digitally manipulated without their consent. Evie, a 22-year-old victim who spoke to The Independent, recounted being ‘bombarded with more than 100 sexualised images of herself in less than a week, including one that digitally stripped her naked.’ Dr. Daisy Dixon, another victim, expressed relief at the investigation, dismissing arguments of ‘censorship’ as a deflection from ‘systematic violence against women and girls.’
Campaigners are urging Ofcom to act with ‘swift and decisive’ action. Emma Pickering from the Refuge charity warned that any delay would leave women vulnerable to further abuse while X continued to profit from their objectification. She highlighted the broader goal of tackling technology-facilitated abuse to halve violence against women and girls within a decade. Pickering emphasized the necessity of holding tech companies accountable and requiring them to implement effective safeguards, rather than prioritizing profit over safety.
Global Response and Musk’s Counter-Argument
The controversy surrounding Grok’s misuse has not been confined to the UK. Malaysia and Indonesia have already taken decisive action, becoming the first countries to block access to Grok over its role in generating sexually explicit and non-consensual images. This international response underscores the widespread concern over the ethical implications and potential for abuse inherent in generative AI technologies.
Meanwhile, Elon Musk, the owner of X and xAI (Grok’s developer), has pushed back against the criticisms. In response to questions about why other AI platforms were not facing similar scrutiny, Musk suggested that the UK government was seeking ‘any excuse for censorship.’ This stance has been met with skepticism by those who argue it deflects from the core issue of harm. Nadhim Zahawi, a former Tory chancellor who recently defected to Reform UK, also voiced caution against a heavy-handed approach, arguing that attacking X based on the owner’s politics could be a ‘dangerous place to be.’ He contended that the kind of image manipulation seen with Grok could be done with simpler tools like PowerPoint, suggesting the issue is broader than just X’s AI.
The Path Forward: Penalties and Precedents
The potential consequences for X are significant. If found in breach of the Online Safety Act, X faces a fine from Ofcom of up to 10% of its worldwide revenue or £18 million, whichever is greater. In extreme cases of non-compliance, Ofcom could seek a court order compelling internet service providers to block access to X across the UK. Northern Ireland’s First Minister, Michelle O’Neill, described X’s response as ‘woefully inadequate’ and called for government intervention, though she noted she continues to use the platform while keeping her communications ‘under review.’
Lorna Woods, a professor of internet law at Essex University, noted to the BBC that the pace of Ofcom’s investigation is ‘hard to predict,’ as the regulator has discretion over its speed. While a business disruption order to block X could theoretically be applied immediately in ‘rare circumstances,’ it is typically a last resort. Clare McGlynn, a law professor at Durham University, suggested that the debate around a potential ban might be a ‘distraction’ from the fundamental issues. Regardless of the timeline, this investigation sets a crucial precedent for how regulatory bodies will engage with advanced AI technologies and their impact on online safety in the years to come.
The Ofcom investigation into X and Grok AI marks a pivotal moment in the governance of artificial intelligence and social media. It highlights the inherent tension between technological innovation and public safety, particularly when sophisticated tools are leveraged for abuse. While the debate around censorship versus accountability continues, the clear message from regulators and victims alike is that platforms must prioritize user protection and actively mitigate the risks posed by their technologies, especially when those risks involve non-consensual harm and child exploitation. The outcome of this probe will undoubtedly shape the future landscape of online content moderation and AI development.