Elon Musk’s artificial intelligence chatbot has generated sexualised images of children that have been shared on social media platform X, raising concerns about the safety of a model used by millions.
Over the past few days, users have been able to get Grok, the AI chatbot developed by Musk’s xAI, to create sexual images of children, which goes against the company’s own user guidelines.
Grok on Friday blamed the issue on “lapses in safeguards”, and said the images had been removed. In a post on X, the chatbot said child sexual abuse material (CSAM) was “illegal and prohibited”, adding that it was fixing the lapses “urgently”.
xAI did not immediately respond to a request for comment.
The chatbot has been hit by several glitches over the past year. In July, Grok repeatedly praised Adolf Hitler and shared antisemitic rhetoric.
The latest incident will raise further concerns about how easy it is to override safety guardrails in AI models, as both the tech industry and regulators grapple with the far-reaching social impact of generative AI.
Generative AI has led to an explosion in AI-generated sexual images of children and non-consensual deepfake nude images, as freely available AI models with no content safeguards and “nudify” apps make generating illegal images easier than ever before.
The Internet Watch Foundation, a UK-based non-profit, said that AI-generated child sexual abuse imagery had doubled in the past year, with material becoming even more extreme.
Grok has been intentionally designed to have fewer content guardrails than competitors, with Musk calling the model “maximally truth-seeking”.
xAI released Grok 4, its latest and most powerful model, in July. It includes a “Spicy Mode” feature that allows users to generate risqué and sexually suggestive content for adults.
Musk’s two-year-old AI start-up acquired X in March in an all-stock deal for $45bn. The transaction valued the combined company at $113bn. X incorporates some xAI features, such as Grok, directly into the platform.
Laws governing harmful AI-generated content are patchy. In May 2025, the Take It Down Act, which tackles so-called AI-generated “revenge porn” and deepfakes, was signed into law in the US.
The UK is also working on a bill to make it illegal to possess, create or distribute AI tools that can generate CSAM, and to require AI systems to be thoroughly tested to check they cannot generate illegal content.
In 2023, researchers at Stanford University found that a popular database used to create AI-image generators was full of CSAM.