Elon Musk has responded to pressure to stop users of his Grok AI model from generating fake sexualised images of real people, following an outcry over the proliferation of the content on his X platform.
The billionaire entrepreneur came under scrutiny last week after thousands of users began using his Grok AI model to generate sexualised deepfakes of women and, in some cases, children, sharing them on his X social media platform as well as the separate Grok app, both of which are run by Musk’s xAI start-up.
UK Prime Minister Sir Keir Starmer on Wednesday said that Musk’s social media platform X, which is part of xAI, had indicated to government officials that it was acting to comply with the country’s online safety laws by blocking the generation of non-consensual sexual imagery.
The climbdown came after the UK government announced it would accelerate the enforcement of new powers making it a criminal offence to create non-consensual intimate images, including with AI.
“We have made clear to X that these images are illegal, reprehensible and need to be dealt with,” Starmer said. “I have been informed that X is acting to ensure full compliance with UK law. If so, that is welcome, but we’re not going to back down — they must act.”
“We will take the necessary measures. We are strengthening existing laws. If we need to strengthen them further, we are prepared to do that,” he added.
Musk had last week pushed back against threats by the UK government to ban the platform over the issue, saying officials wanted “any excuse for censorship” and arguing that X was being singled out while rivals offered similar capabilities.
The world’s richest man has pushed for his own AI products to have fewer content “guardrails” than competitors such as OpenAI and Google in line with his “free speech” ideals, but also in a bid to drive downloads of his Grok app, according to insiders.
However, in a post on X on Wednesday, Musk said: “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state.”
He added that he was “not aware of any naked underage images generated by Grok. Literally zero.”
xAI did not respond to a request for comment.
The issue has sparked a global outcry. The California attorney-general on Wednesday said the US state had opened an investigation into Grok and xAI “over undressed, sexual AI images of women and children”.
A spokesperson for the European Commission said the EU “took note of the additional measures X is taking to ban Grok from generating sexualised images of women and children” and added that “should these changes not be effective, the commission will not hesitate to use the full enforcement toolbox of the DSA”, referring to its Digital Services Act, which polices content online.
Jonathan Lewis, the UK managing director of X, told the UK magazine Campaign: “The X platform has been restricted to no longer allow the editing of images of real people in revealing clothing. So, for example, the issue of some users choosing to put people in bikinis.”
Last Friday, xAI said it was limiting the use of its Grok image generator to paid subscribers only, following the threats of fines and bans in the EU, UK and France, and the announcement of an investigation by Britain’s Ofcom regulator. Grok has also been banned in Indonesia and Malaysia.
As of late on Tuesday, Grok appeared to be ignoring requests from a number of X users across the globe to generate sexualised deepfakes, though some were still being generated, according to data provided by the Institute for Strategic Dialogue think-tank.
The number of requests for non-consensual deepfakes exploded earlier in January, particularly after Musk shared a fake image of himself in a bikini.
It also comes as xAI has experienced turnover in its safety teams. In the first week of December, xAI’s head of product safety, Vincent Stark, left the company, along with Norman Mu, one of its top AI safety researchers, and Alex Chen, who oversaw the fine-tuning of Grok’s personality and behaviour.
Additional reporting by Barbara Moens in Brussels