Sunday, 1 February 2026

CNCB News

International News Portal

X Says It's Finally Doing Something About Grok's Deepfake Porn Problem, but It's Not Nearly Enough

X says it's trying to stop intimate deepfakes from spreading on the site, but as of now, you can probably still generate them.

After weeks of pressure from both advocacy groups and governments, Elon Musk's X says it's finally going to do something about its deepfake porn problem. Unfortunately, testing that followed the announcement suggests the fix falls short, leaving some of us, myself included, still holding our breath.

When did the X deepfake porn controversy begin?

The controversy started earlier this January, after the social media site added a feature allowing X users to tag Grok in their posts and prompt the AI to instantly edit any image or video posted to the site, all without the original poster's permission. The feature seemingly came with few guardrails, and according to reporting done by AI authentication company Copyleaks, as well as statements victims have given to sites like Metro, posters on X quickly started using it to generate explicit or intimate images of real people, particularly women. In some cases, child sexual abuse material was also reportedly generated.

It's pretty upsetting stuff, and I wouldn't advise you to go looking for it. While the initial trend seemed to focus on AI photos of celebrities in bikinis, posters quickly moved on to manipulated images of regular people where they appeared to be pregnant, skirtless, or in some other kind of sexualized situation. While Grok was technically able to generate such imagery from uploaded photos before, the ease of access to it appeared to open the floodgates. In response to the brewing controversy, Musk had Grok generate a photo of himself in a bikini. However, the jokes ceased after regulators got involved.

Governments are starting to investigate

Earlier this week, the UK launched investigations into Grok's alleged deepfake porn, to determine whether it violated laws against nonconsensual intimate images as well as child sexual abuse material. Malaysia and Indonesia went a step further, actually blocking Grok access in those countries. On Wednesday, California began its own investigations, with Attorney General Rob Bonta saying "I urge XAI to take immediate action to ensure this goes no further."

X is implementing blocks

In response to the pressure, X cut off the ability to tag Grok for edits on its social media site for everyone except subscribers. However, the Grok app, website, and in-X chatbot (accessible via the sidebar on the desktop version of the site) remained open to everyone, allowing the flood of deepfaked AI photos to continue. Those photos would obviously pose the same problems even if generated solely by subscribers, although X later said the goal was to stem the tide and make it easier to hold users generating illegal imagery accountable.

The Telegraph reported on Tuesday that X also started blocking tagged Grok requests to generate images of women in sexualized scenarios, but that such images of men were still allowed. Additionally, testing by both U.S. and U.K. writers from The Verge showed that the banned requests could still be made to Grok's website or app directly.

Musk has taken a more serious tone in more recent comments on the issue, denying the presence of child sexual abuse material on the site, although various replies to his posts expressed disbelief and claimed to show proof to the contrary. Scroll at your own discretion.

While trying to finally put the controversy to bed, X said in a Safety blog on Wednesday that it would now be blocking all requests to the Grok account for images of any real people in revealing clothing, regardless of gender and whether coming from paid subscribers or not. But for anyone hoping that would mark the end of this, there appears to be some fine print.

Specifically, while the statement said that these guardrails would apply to all users tagging the Grok account on X, the standalone Grok website and app were not mentioned in the initial version of the statement. This has since been updated, as I was informed by an X spokesperson via email on Friday, but there still appear to be ways to generate these images.

For instance, the statement does say it will also block creation of such images in "Grok on X," referring to the in-X version of the chatbot, but it's not a total block. Instead, the imagery will be "geoblocked," meaning the block will only be applied "in those jurisdictions where it's illegal." Geoblocks are also apparently being added to the Grok app, as written in the statement, but a full block is not mentioned for anything other than the Grok account on X. The Grok website is also still not mentioned. Personally, even if this technically meets the legal bare minimum, it's not the choice I would make, since it leaves ways for users to generate (or have others generate) the imagery on their own and then post it to the site manually.

It's important to note that, while the majority of the criticism lobbed at X during this debacle does not accuse the site of generating fully nude imagery, jurisdictions like the UK ban nonconsensual explicit imagery regardless of whether it is fully nude or not. Virtually undressing a subject down to their underwear without their consent could be enough to trigger legal action.

Some users can still generate sexualized deepfakes

It's the biggest crackdown X has made on these images yet, but for now, the holes are apparent. According to further testing by The Verge, the site's reporters were still able to generate revealing deepfakes even after Wednesday's announcement, via the Grok app, which was not originally mentioned in the update. On Thursday, when I attempted this using a photo of myself, both the Grok app and the standalone Grok website still gave me full-body deepfaked images of myself in revealing clothing not present in the original shot. I was also able to generate these images using the in-X Grok chatbot, and some images changed my posing to be more provocative, too, which I did not prompt.

I tried again using the Grok app on Friday, after X's Safety blog was updated to mention that geoblocks were being applied to it, and was still able to generate these images.

As such, the battle is likely to continue. It's unclear whether the ability for some users to continue generating this imagery is an oversight, or if X is only seeking to block its most visible holes. One would hope the former, given that X said that it has "zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content."

It is worth noting that I am located in New York State, which might not be part of the geoblock, although we do have a law against explicit nonconsensual deepfakes. It's possible the geoblock for my location is still being implemented, but it could also be an indication that the safeguards are still failing.

When I reached out to X for clarification on the issue, I was pointed to the updated Safety blog, but not to a list of locations where geoblocks are being implemented, which I also requested. I will continue to try to get answers on this front, and will update this post if I hear back.

In the meantime, while governments continue their investigations into the legal situation, others are calling for more immediate action from app stores. A letter sent from U.S. Senators Ron Wyden, Ben Ray Luján, and Ed Markey to Apple CEO Tim Cook and Google CEO Sundar Pichai argues that Musk's app now clearly violates both App Store and Google Play policies, and calls on the tech leaders to "remove these apps from the [Apple and Google] app stores until X's policy violations are addressed."