AI Has A Misogyny Problem, And The Latest Grok Situation Is Proof

AI platform Grok is reportedly being used for generating sexualised images of real women, including those of minors. The issue shows how fast AI ethics collapse without guardrails.

Sagalassis Kaur

Photograph: (Bloomberg.com)


Grok, the AI platform from Elon Musk's company, has recently been in the news over its image-generation technology. The software, built into X (formerly known as Twitter), is now being used to generate sexualised images of real women, including minors.


Prompts like "put her in a transparent bikini" or "remove her clothes" were used repeatedly on the chatbot after its newly launched, so-called "spicy mode" began producing sexualised images of real people.

At the core of this controversy lies a profound question about AI ethics and consent. Unlike traditional sexual content, which can be regulated at the point of creation and distribution, AI can produce realistic images of people who never agreed to be depicted that way.

The Sudden Spread

The problem started last year, when the AI platform hosted on X launched its "spicy mode", through which users could generate sexual images using text prompts.

The controversy escalated last month, when large numbers of users were able to generate such images of real people who had posted their pictures on social media.

A nonprofit group called AI Forensics said in a report that it analysed 20,000 images generated by Grok between December 25, 2025, and January 1, 2026.

It found that 2% depicted a person who appeared to be 18 or younger, including 30 images of "young or very young women or girls" in bikinis or transparent clothing.


A growing list of countries and organisations, including the European Union, India, the UK and Malaysia, is seeking a response from Elon Musk and the platform.

Company's Response

Several users complained about 'offensive' AI images to India's Ministry of Electronics and Information Technology (MeitY), which directed the US-based company to submit an action taken report (ATR) on the offensive content generation within 72 hours of its direction.

Musk responded vaguely in an X post. On January 3, he said that users who use the platform's AI services to make illegal content will face the same consequences as those who upload illegal content.

What Governments have to say

According to The Hindu, European Union officials called the content 'illegal', warning that platforms allowing such outputs could violate the Digital Services Act.

UK Technology Secretary Liz Kendall said, "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls."


In India, the Ministry of Electronics and IT issued a notice to X, demanding an explanation of the safeguards being put in place.

The Paris prosecutor's office said it was widening an ongoing investigation of X to include sexually explicit deepfakes, after officials received complaints from lawmakers.

The Malaysian communications regulator said it was investigating X users who violated laws prohibiting the circulation of "grossly offensive, obscene or indecent content."

This moment is not about one AI platform or tool. It is about accountability and transparency towards civil society. If technology companies want the freedom to build powerful AI software, they should also accept responsibility when those systems are used to cause harm.

Otherwise, "innovation" becomes an excuse for exploitation.

Views expressed by the author are their own.
