Social media platform X has announced new safeguards to prevent its AI tool, Grok, from generating or editing images that undress real people. The decision follows intense backlash from governments, regulators, campaigners, and victims over the spread of sexualised AI deepfakes on the platform.
The move marks a significant policy shift for X after weeks of criticism over how Grok was used to manipulate images of women and public figures into revealing or sexualised content.
What Prompted X to Act on Grok AI
The backlash gained momentum after users began sharing AI-edited images created using Grok. These images showed real people in bikinis, underwear, or sexually suggestive clothing without consent.
Many of the images circulated publicly on X. Campaigners said the visibility and scale of the abuse caused lasting reputational and emotional harm. Victims described feeling humiliated and unsafe.
Grok was launched in 2023 by X owner Elon Musk as a competitor to other generative AI tools. However, critics argued that safety controls failed to keep pace with its capabilities.
What X Has Changed About Grok AI
X confirmed that it has introduced technological measures to block Grok from editing images of real people into revealing or sexualised forms in jurisdictions where such content is illegal.
The platform said it has implemented location-based restrictions, known as geoblocking. These controls prevent users in certain countries from generating images of real people in bikinis, underwear, or similar attire.
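In outline, a location-based restriction of this kind amounts to a check performed before an edit is generated: if the request involves a real person in revealing attire and the user's apparent location is a restricted jurisdiction, the edit is refused. X has not disclosed how Grok actually implements this, so the function and jurisdiction list below are purely illustrative:

```python
# Hypothetical sketch of a geoblocking check. X has not published
# implementation details, so all names and values here are illustrative.

# Example jurisdiction list using ISO 3166-1 alpha-2 codes.
# "GB" (United Kingdom) is shown only as an example entry.
RESTRICTED_JURISDICTIONS = {"GB"}

def is_edit_blocked(user_country: str,
                    depicts_real_person: bool,
                    sexualised: bool) -> bool:
    """Return True if the requested image edit must be refused.

    user_country: two-letter country code inferred for the user,
    e.g. from IP geolocation (how X determines this is not public).
    """
    # Only edits that sexualise a real person fall under this rule.
    if not (depicts_real_person and sexualised):
        return False
    return user_country.upper() in RESTRICTED_JURISDICTIONS
```

On this sketch, a request from the UK to generate a sexualised image of a real person would be refused, while the same request from an unrestricted country, or a non-sexualised edit from the UK, would pass the check — which is exactly the gap critics raise about enforcement later in the piece.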
X also reiterated that only paid users can access Grok’s image-editing features. The company says this makes it easier to hold users accountable if they break platform rules.

UK Government and Regulator Response
The UK government welcomed the announcement, describing it as validation of earlier demands for stronger controls on AI-generated abuse.
Media regulator Ofcom called the move a welcome step but confirmed that its investigation into X remains active. Ofcom is examining whether the platform breached UK laws related to online safety and harmful content.
Technology Secretary Liz Kendall said regulators must still establish the facts fully. She stressed that enforcement, not promises, would determine compliance.
Victims Say the Damage Is Already Done
Campaigners and victims argue that X acted too late. Journalist and campaigner Jess Davies said Grok was used to edit images of her and other women.
She said the abuse felt worse because the images appeared openly on X. According to Davies, the platform’s response focused on legal minimums rather than genuine protection.
Davies warned that many women now live with long-term harm caused by AI image abuse. She said the changes, while positive, cannot undo what already happened.
Academics and Advocacy Groups Weigh In
Dr Daisy Dixon from Cardiff University described X’s reversal as a partial victory for campaigners. She said the abuse should never have occurred.
Dixon explained that AI image manipulation distorts how women experience their bodies and public identity. She added that platforms must act before harm emerges, not after outrage.
Andrea Simon, director of the End Violence Against Women Coalition, said the decision shows that pressure works. She urged tech firms to take proactive steps as AI-generated harms evolve.
Political Pressure Intensifies Worldwide
X announced the changes shortly after California’s top prosecutor confirmed an investigation into sexualised AI deepfakes. The probe includes concerns over content involving children.
Earlier, Musk defended X by claiming critics wanted to suppress free speech. He later shared AI-generated images of UK Prime Minister Keir Starmer in a bikini, which sparked further political backlash.
Starmer warned that X could lose the right to self-regulate. He said the government would strengthen laws if platforms failed to protect users.
Questions Over Enforcement and Geoblocking
Despite the announcement, experts remain sceptical about enforcement. Critics question how Grok will reliably detect whether an image shows a real person.
Policy researcher Riana Pfefferkorn said X should have removed the feature immediately after abuse began. She also questioned whether geoblocking would work.
Users often bypass location restrictions using virtual private networks. VPN use surged in the UK after new age-verification rules took effect, raising concerns that Grok safeguards may face similar challenges.
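The weakness described above follows from how IP-based geolocation works: the platform can only see the address a request arrives from, so traffic routed through a VPN exit node appears to originate in the exit node's country. A minimal illustration, with a made-up lookup table and documentation-range addresses standing in for a real geolocation database:

```python
# Illustrative only: why IP-based geoblocking can be bypassed via VPN.
# The table and addresses are fictional (RFC 5737 documentation ranges);
# real services use geolocation databases, not hard-coded mappings.
IP_TO_COUNTRY = {
    "203.0.113.7": "GB",   # user's real ISP address in the UK
    "198.51.100.9": "US",  # VPN provider's exit node in the US
}

def apparent_country(source_ip: str) -> str:
    """Country the platform would infer from the connecting address."""
    return IP_TO_COUNTRY.get(source_ip, "UNKNOWN")

# Direct connection: the platform sees a UK address and applies UK rules.
# Via VPN: the same user's traffic arrives from the US exit node,
# so any UK-only restriction is never triggered.
```

This is why experts question whether geoblocking alone can be relied on: the safeguard keys off an attribute the user controls, not one the platform can verify.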
Ofcom Investigation Continues
Ofcom confirmed it is investigating whether X failed to meet its legal duties. If the regulator finds serious breaches, it could seek court orders forcing internet providers to block X in the UK.
The regulator said platforms must show real compliance, not policy announcements alone. Enforcement action remains possible if X fails to meet legal standards.
A Turning Point for AI and Platform Responsibility
The Grok controversy highlights growing global concern over AI-generated abuse. Critics argue that platforms often prioritise innovation over safety.
X’s policy change signals a shift, but campaigners say it must go further. As governments increase scrutiny, AI tools may soon face stricter regulation.
The coming months will test whether X’s safeguards work in practice. Regulators, victims, and users will watch closely to see if platforms can balance innovation with responsibility.