Minors Sue Elon Musk's xAI Over Grok-Generated Child Sexual Abuse Material

16 March 2026 · Crime · 8 sources

Key Takeaways

  • Three teenage girls filed a class-action against xAI alleging Grok created CSAM from their photos.
  • Lawsuit claims Grok altered real minor photos into explicit material and circulated manipulated images.
  • Filing location contested: California per Guardian/Engadget; Tennessee per The Verge.

Lawsuit Filed Against xAI

The lawsuit, filed in California where xAI is headquartered according to The Guardian and Engadget (The Verge reports it was filed in Tennessee), represents the first legal action by minors following Grok's widespread generation of nonconsensual nude images earlier this year.


The plaintiffs, identified as Jane Doe 1, Jane Doe 2, and Jane Doe 3, claim that xAI intentionally designed Grok to 'profit off the sexual predation of real people, including children' despite knowing the dangerous consequences.

The suit seeks class-action status representing 'at least thousands of minors' who may have been victimized by the AI system's ability to manipulate real photos into sexually explicit content.

CSAM Distribution Methods

The lawsuit details how the victims discovered their images had been manipulated and distributed online.

One plaintiff, Jane Doe 1, received an Instagram message in December from an anonymous user alerting her that someone had uploaded AI-altered nude videos and images of herself and other girls from her high school to a Discord server.


According to the complaint, at least five files depicted her actual face and body in familiar settings but morphed into sexually explicit poses.

The perpetrator allegedly used the AI-generated files 'as a bartering tool in Telegram group chats with hundreds of other users, trading her CSAM files for sexually explicit content of other minors.'

Criminal investigators later confirmed that the images were circulating on Telegram, where they were allegedly being traded as currency for other child sexual abuse material.

The images showed the victims' entire bodies, including their genitals, without clothing.

Safety Failures and Scale

The lawsuit alleges that xAI failed to implement basic safety measures that other advanced AI labs use to prevent their systems from generating sexual content involving real people, especially children.


The complaint argues that Grok lacked filters designed to block attempts to create child exploitation material or manipulate photos of identifiable individuals.

Lawyers for the plaintiffs point to Elon Musk's public promotion of Grok's ability to generate sexualized images and depict real people in revealing clothing as evidence of intentional design.

The filing contends that allowing the system to generate erotic content from real photos makes it extremely difficult to prevent the creation of sexual images involving minors.

According to researchers at the Center for Countering Digital Hate, Grok had produced about 3 million sexualized images in less than two weeks, with approximately 23,000 of those depicting children at the peak of the scandal.

Musk's Response

Elon Musk has consistently denied knowledge of Grok generating child sexual abuse material, claiming in January that he was 'not aware of any naked underage images generated by Grok. Literally zero.'

He also claimed that Grok would not generate any illegal images and that its operating principle was to follow local laws.


However, his public statements contradict evidence of Grok's widespread generation of nonconsensual sexualized content.

In response to the scandal, xAI announced in January that it would stop allowing people to use Grok to edit images of real people into bikinis and limit the image-generation feature to paid subscribers.

The company has not yet responded to requests for comment regarding the lawsuit.

Musk's denials continue to face scrutiny as evidence mounts that Grok generated CSAM at scale.

Legal and Regulatory Response

Musk and xAI faced intense scrutiny after Grok flooded X with explicit images of adults and minors, sparking nationwide calls for investigation by the Federal Trade Commission, a probe from the European Union, and a warning from UK Prime Minister Keir Starmer.


In January, the Senate passed a bill that would allow victims of nonconsensual deepfakes to sue the people who created the images.

The Take It Down Act, signed into law by President Donald Trump in 2025, criminalizes the distribution of nonconsensual, AI-generated intimate images; its requirement that platforms remove such content takes effect in May.

The plaintiffs are seeking civil penalties under several laws designed to protect children from exploitation and punish corporate negligence.

The case highlights the urgent need for stronger safeguards in AI development to prevent the creation of abusive and illegal content.
