In brief
Australia’s eSafety Commissioner flagged a spike in complaints about Elon Musk’s Grok chatbot creating non-consensual sexual images, with reports doubling since late 2025.
Some complaints involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
The concerns come as governments worldwide scrutinize Grok’s lax content moderation, with the EU declaring the chatbot’s “Spicy Mode” unlawful.
Australia’s independent online safety regulator issued a warning Thursday about the growing use of Grok to generate sexualized images without consent, revealing her office has seen complaints about the AI chatbot double in recent months.
The country’s eSafety Commissioner Julie Inman Grant said some reports involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
“I am deeply concerned about the growing use of generative AI to sexualise or exploit people, particularly where children are involved,” Grant posted on LinkedIn on Thursday.
The comments come amid mounting international backlash against Grok, a chatbot built by billionaire Elon Musk’s AI startup xAI, which can be prompted directly on X to alter users’ photos.
Grant warned that AI’s ability to generate “hyper-realistic content” is making it easier for bad actors to create synthetic abuse, and harder for regulators, law enforcement, and child-safety groups to respond.
Unlike rivals such as ChatGPT, Musk’s xAI has positioned Grok as an “edgy” alternative that generates content other AI models refuse to produce. Last August, it launched “Spicy Mode” specifically to create explicit content.
Grant warned that Australia’s enforceable industry codes require online services to implement safeguards against child sexual exploitation material, whether AI-generated or not.
Last year, eSafety took enforcement action against widely used “nudify” services, forcing their withdrawal from Australia, she added.
“We have entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle,” Grant said, noting that eSafety will “investigate and take appropriate action” using its full range of regulatory tools.
Deepfakes on the rise
In September, Grant secured Australia’s first deepfake penalty when the federal court fined Gold Coast man Anthony Rotondo $212,000 (A$343,500) for posting deepfake pornography of prominent Australian women.
The eSafety Commissioner took Rotondo to court in 2023 after he defied removal notices, saying they “meant nothing to him” as he was not an Australian resident, then emailed the images to 50 addresses, including Grant’s office and media outlets, according to an ABC News report.
Australian lawmakers are pushing for stronger protections against non-consensual deepfakes beyond existing laws.
Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, which would allow individuals sharing non-consensual deepfakes to be fined $102,000 (A$165,000) up-front, with companies facing penalties of up to $510,000 (A$825,000) for non-compliance with removal notices.
“We are now living in a world where increasingly anyone can create a deepfake and use it however they want,” Pocock said in a statement, criticizing the government for being “asleep at the wheel” on AI protections.