Elon Musk’s social media platform X (formerly Twitter) has restricted the image editing features of its AI tool Grok to paying subscribers after heavy criticism that the tool enabled the creation of harmful and explicit content. The controversy erupted after users exploited the tool to create sexualized deepfake images, including non-consensual alterations of photos of women and minors. In response to the backlash, X has limited image editing to subscribers of its paid service, who must have payment information on file. Critics argue, however, that this move fails to address the deeper issues surrounding Grok’s potential for abuse, and that the platform must take far more responsibility for preventing harmful uses of its technology.
Let’s explore the details of this evolving situation, the ethical concerns it raises, and the broader implications of AI-driven content manipulation in today’s digital landscape.
The Backlash Over Deepfake Content: A Growing Concern
The use of deepfake technology has become a significant concern as AI tools continue to advance, offering both legitimate applications and serious risks. Grok, an AI assistant that lets users request AI-generated responses to posts, was originally designed to boost engagement on the platform through image edits, text generation, and more. However, some users began misusing the tool, asking Grok to digitally alter images in ways that were not only non-consensual but deeply exploitative. Among these were demands to create sexually explicit images by digitally undressing people without their consent. Many of these requests targeted women, including public figures, while others focused on minors.
The most alarming reports involved what appeared to be “criminal imagery” generated using Grok, including sexually explicit and abusive content featuring girls aged between 11 and 13. Such material is not just a violation of consent but an apparent breach of laws against child exploitation material. The BBC and several advocacy organizations quickly flagged the issue, sparking an intense public outcry.
Professor Clare McGlynn, an expert in the legal regulation of pornography and sexual violence, criticized the tool’s creators for failing to address the platform’s potential for harm. She stated that X’s decision to restrict access to Grok was a superficial fix and that the company had failed to take adequate precautions from the outset to prevent such abuse. In her view, Musk’s platform should have anticipated these issues before allowing Grok to be used in ways that exploit individuals and violate their rights.
Grok’s Restriction to Paid Subscribers: A Controversial Response
In the face of the deepfake scandal, X made the decision to limit access to Grok’s image editing functions to paying subscribers. Users who attempt to request image alterations are now informed that these features are only available to individuals who subscribe to X’s premium service. The notification reads: “Image generation and editing are currently limited to paying subscribers. Subscribe to unlock these features.”
While some users have welcomed the change, arguing that it may reduce the tool’s misuse, critics remain unconvinced. Access is now tied to a paid subscription, the same tier that confers the blue checkmark, so the image editing tool is largely restricted to those who can afford the service, leaving most of X’s user base unable to access the feature. Critics note that this puts a price on the tool rather than a safeguard around it: a paying subscriber can still attempt the same abusive requests.
Dr. Daisy Dixon, one of the women targeted when users prompted Grok to generate sexualized images of her, expressed mixed feelings about the move. While she welcomed the restriction, she called it a “sticking plaster” rather than a comprehensive solution, and urged that Grok be redesigned from the ground up with built-in ethical guardrails to prevent further abuse. She emphasized that Musk and X’s leadership needed to acknowledge the scale of the harm caused and address it more substantively.
The Role of Gender-Based Abuse in AI Misuse
Grok’s misuse raises broader issues about gender-based violence and exploitation in the digital age. The fact that Grok was used primarily to create exploitative content targeting women speaks to the ongoing prevalence of online abuse and harassment. The digital manipulation of images to create sexualized depictions of individuals without their consent is not only a violation of personal privacy but also contributes to a broader culture of gender-based violence.
Professor McGlynn further emphasized that this kind of abuse is not an isolated incident but part of a larger trend of gendered digital violence. She noted that X’s response mirrored its handling of earlier deepfake incidents, such as the pornographic deepfakes of pop star Taylor Swift created using Grok’s video feature. Instead of proactively preventing harmful content from being created, X has consistently waited until significant public pressure forced a response. This reactive approach, McGlynn argued, undermines the platform’s ability to take responsibility for its tools and their potential for misuse.
Impact on Victims: Dehumanization and Humiliation
One of the most troubling aspects of Grok’s misuse is the emotional and psychological toll it takes on its victims. Women and girls whose images were altered to create sexualized content reported feeling humiliated, dehumanized, and violated. Many expressed a sense of helplessness, knowing that their images were being manipulated by strangers for the purposes of sexual exploitation. This kind of digital abuse is especially damaging because it can remain online indefinitely, potentially harming the reputations of its victims and leading to lasting trauma.
In the case of minors, the harm is even more severe. The discovery of child exploitation content generated using Grok is a chilling reminder of the dangers posed by AI tools that lack adequate safeguards. Advocacy groups like the Internet Watch Foundation have condemned the platform for allowing the creation and circulation of such harmful content, and many have called for stronger regulations to prevent such tools from being used for exploitation.
Legal and Regulatory Pressure on X
The deepfake scandal has led to mounting calls for regulatory action. In the UK, government officials have urged Ofcom, the media regulator, to use its full powers under the Online Safety Act to address the issue. Ofcom has the authority to block X’s access to the UK market if it fails to comply with regulations designed to prevent the spread of harmful content. Prime Minister Sir Keir Starmer has expressed strong support for Ofcom’s efforts, labeling the use of Grok to generate sexualized images of minors as “disgraceful” and “disgusting.”
The government’s backing of Ofcom signals that this issue is being taken seriously at the highest levels of leadership. Ofcom’s involvement is critical: it can compel platforms like X to remove harmful content promptly and to implement safeguards against future abuses. However, critics argue that the platform’s current response, limiting access to paying subscribers, does little to address the root cause of the problem and fails to create meaningful change.
Grok’s Future: Ethical Redesign or Perpetual Controversy?
As AI tools like Grok become more powerful and widespread, it is essential for platforms to design these technologies with ethical considerations in mind. The current scandal highlights the urgent need for stronger protections against the misuse of AI in creating harmful content. Whether or not X takes the necessary steps to redesign Grok and implement safeguards remains to be seen.
A true solution would involve not only limiting access to AI features for paying users but also embedding ethical frameworks within the AI’s design to prevent it from being used for harmful purposes in the first place. This could include implementing better content moderation tools, ensuring that requests for image manipulations are properly monitored, and providing users with clear guidelines on what constitutes acceptable content.
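To make the idea of built-in guardrails concrete, the sketch below shows one way a pre-generation check could refuse abusive edit requests before any image is produced. It is a minimal, hypothetical illustration only: the function names, categories, and keyword checks are assumptions for the sake of the example, not X’s or xAI’s actual moderation pipeline, which is not public.

```python
# Hypothetical sketch of a pre-generation guardrail for an image-editing AI.
# It does NOT reflect X's or xAI's actual implementation: the names, categories,
# and keyword checks are illustrative assumptions. A production system would
# use trained safety classifiers, identity detection, human review, and audit logs.

from dataclasses import dataclass

# Categories a real system would detect with dedicated classifiers,
# not the toy keyword matching used below.
BLOCKED_CATEGORIES = {
    "sexual_content_of_minors",      # always refused, and reported
    "non_consensual_sexualization",  # e.g. "undress" requests
}


@dataclass
class EditRequest:
    user_id: str
    prompt: str
    subject_is_real_person: bool  # assumed output of an upstream identity detector


def classify_prompt(prompt: str) -> set[str]:
    """Toy stand-in for a trained safety classifier over the edit prompt."""
    flags: set[str] = set()
    lowered = prompt.lower()
    if any(term in lowered for term in ("undress", "remove clothes", "nude")):
        flags.add("non_consensual_sexualization")
    return flags


def moderate(request: EditRequest) -> tuple[bool, str]:
    """Return (allowed, reason), refusing BEFORE any image is generated."""
    flags = classify_prompt(request.prompt)
    if flags & BLOCKED_CATEGORIES:
        # A real system would also log the refusal for audit and escalation.
        return False, f"refused: matched {sorted(flags)}"
    if request.subject_is_real_person and "sexual" in request.prompt.lower():
        return False, "refused: sexualized edit of a real, identifiable person"
    return True, "passed pre-generation checks"


if __name__ == "__main__":
    req = EditRequest("u123", "undress the woman in this photo", True)
    print(moderate(req))  # -> (False, "refused: matched ['non_consensual_sexualization']")
```

The design point this sketch illustrates is the one critics have been making: refusal happens before generation, so abusive content never exists to circulate, in contrast to the reactive, after-the-fact takedowns the platform has relied on.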
Ultimately, the controversy surrounding Grok is a reminder that, as AI technology continues to evolve, so too must the regulations and ethical standards that govern its use. The harm caused by deepfake content is real, and platforms must take responsibility for the tools they create and the potential consequences of their misuse.