The circulation of explicit, nonconsensual images generated by Grok on X is intensifying debate over who bears legal responsibility for harmful artificial intelligence outputs, according to reporting by Axios.
Legal experts say the controversy exposes unresolved questions around liability when chatbots create defamatory or sexualized content.
While courts have largely sided with tech firms on the use of copyrighted data for AI training, newer lawsuits focus on whether companies are responsible for what their systems generate.
Elon Musk's Grok under fire for generating explicit AI images of minors https://t.co/YmXnVfAjlL
— Axios (@axios) January 2, 2026
Grok stands apart because it accepts requests that other chatbots reject and makes user prompts and responses publicly visible. Attorneys argue this weakens claims that platforms are merely hosting third-party content.
Several scholars say protections under Section 230 of the Communications Decency Act may not apply when AI systems themselves create the material.
Government demands Musk's X deals with 'appalling' Grok AI deepfakes https://t.co/FIQf9vvoGd
— BBC News (UK) (@BBCNews) January 6, 2026
Despite the backlash, Grok’s parent company xAI has reported surging engagement and raised $20 billion in new funding.
Observers say upcoming court rulings and enforcement of new federal and state laws on deepfakes could define the future legal framework for AI-generated content.