Grok Faces Global Regulatory Storm: Complaints in Australia Double, Multiple Countries Act Simultaneously

The latest warning from Australian regulators has elevated Grok’s image-abuse problem from a regional risk to an international concern. According to recent reports, Australian eSafety Commissioner Julie Inman Grant stated that complaints about Grok generating non-consensual sexualized images have doubled since the end of 2025, with some involving child sexual abuse material. This is not an isolated Australian incident but part of a worldwide pattern of abuse of Grok’s features.
How Serious Is the Problem?
Grok’s “Spicy Mode” Becomes a Flashpoint
Grok’s built-in “Spicy Mode” feature is accused of directly enabling AI deepfake abuse. The feature allows users to generate more explicit content, including non-consensual sexualized images. According to recent data, the number of sexually suggestive AI images Grok publishes per hour is 84 times the combined output of the five largest dedicated deepfake websites, a figure suggesting the problem has grown far beyond any single platform’s ability to contain it informally.
User Base Behind the Surge in Complaints
Grok has approximately 600 million monthly active users (combined across the X platform and the Grok app), which means the potential base for abuse is extremely large. While the doubling of complaints reflects the severity of the issue, it is likely only the tip of the iceberg, since many victims never file formal reports.
Global Regulatory Cooperation
The Australian warning is not isolated but part of a coordinated response by multiple regulatory agencies:
| Country/Region | Regulatory Agency | Main Actions |
| --- | --- | --- |
| European Union | European Commission | Ordered X to retain all Grok data until the end of 2026 and launched an investigation |
| Australia | eSafety Commissioner’s Office | Issued a formal warning and is prepared to take legal measures |
| United Kingdom | Ofcom | Has issued a warning |
| India | Ministry of Electronics and Information Technology (MeitY) | Demanded X submit an action report, potentially threatening its “safe harbor” status |
This synchronized global regulatory pressure indicates that Grok’s issues have become a shared concern among governments worldwide.
Potential Impact on xAI and X Platform
Conflict Between Regulatory Scrutiny and Business Prospects
Although xAI completed a $20 billion Series E funding round last year at a $230 billion valuation, regulatory scrutiny could significantly impact its commercial outlook. The EU has ordered data retention, and India has threatened to revoke its “safe harbor” status; both are signs of potential service restrictions.
Clarification of Platform Responsibilities
The Australian eSafety Commissioner emphasized that, under Australian law, platforms must strengthen regulation of AI-generated content. This means X and xAI cannot evade responsibility by claiming “user-generated content.” Platforms are required to take direct responsibility for Grok’s feature settings, content moderation, and risk management.
Possibility of Feature Discontinuation
The EU has already deemed “Spicy Mode” illegal. If other countries follow suit, Grok could face restrictions or even be forced offline, which would be a major blow to the X platform, given its reliance on Grok as a growth engine.
Future Development Directions
Based on current information, several paths may unfold:
- xAI might be forced to overhaul Grok’s image generation features, including enhanced age verification, watermarks, and real-time content moderation
- Multiple countries may introduce dedicated legislation on AI-generated content this year, clarifying platform responsibilities
- The X platform could face stricter regulatory scrutiny, with potential restrictions or removal in certain countries
- Other AI companies might raise their content moderation standards to avoid similar risks
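To make the first point concrete, the “real-time content moderation” regulators are demanding can be sketched as a policy gate that runs before any image is generated. Everything below is an illustrative assumption, not xAI’s actual design: real systems use trained safety classifiers and human review rather than keyword lists.

```python
# Hypothetical sketch of a pre-generation moderation gate.
# All rule sets and names here are illustrative, not any vendor's real policy.
from dataclasses import dataclass

# Illustrative keyword lists; production systems would use ML classifiers,
# since naive substring matching both over- and under-blocks.
SEXUALIZED_TERMS = {"nude", "undressed", "explicit", "sexualized"}
MINOR_TERMS = {"child", "minor", "schoolgirl"}
REAL_PERSON_MARKERS = {"photo of", "picture of", "image of"}


@dataclass
class ModerationResult:
    allowed: bool
    reason: str


def moderate_prompt(prompt: str) -> ModerationResult:
    """Decide whether an image-generation prompt passes the policy gate."""
    text = prompt.lower()
    sexualized = any(t in text for t in SEXUALIZED_TERMS)
    # Hardest rule first: sexualized content referencing minors is refused.
    if sexualized and any(t in text for t in MINOR_TERMS):
        return ModerationResult(False, "child-safety policy violation")
    # Sexualized depictions of identifiable real people are refused.
    if sexualized and any(m in text for m in REAL_PERSON_MARKERS):
        return ModerationResult(False, "non-consensual sexualized imagery")
    return ModerationResult(True, "ok")
```

The point of the sketch is architectural: the check sits in the request path, so a refusal costs one function call rather than an after-the-fact takedown, which is precisely the difference regulators are pressing platforms on.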
Summary
Grok faces not only a surge in complaints from Australia but a global regulatory storm. From the EU’s data retention orders to India’s “safe harbor” threats and Australia’s formal warning, multiple regulators have reached a consensus on this issue. xAI and the X platform need to recognize that technological innovation must be built on respecting ethical and legal boundaries. Otherwise, even substantial funding cannot compensate for the losses caused by regulatory risks. This incident also serves as a reminder to the entire AI industry that content moderation and risk control are not optional add-ons but essential components of product design.