Moroccan MP urges govt to combat AI-generated sexual images

Ezzoumi formally questions the government's strategy to address the misuse of generative AI tools for creating non-consensual sexual images, particularly those victimizing women and underage girls.

RABAT - Moroccan parliamentarian Khadija Ezzoumi has formally questioned the government's strategy for addressing the misuse of generative AI tools to create non-consensual sexual images, particularly those victimizing women and underage girls, a move that highlights growing concern over the dark side of artificial intelligence.

Ezzoumi, a member of Morocco's House of Representatives, directed her written inquiry to Amal El Fallah Seghrouchni, the Minister of Digital Transition and Administrative Reform. 

In her question, Ezzoumi emphasized how the swift proliferation of AI technologies has outstripped current regulatory frameworks, thereby endangering individuals' privacy, dignity, and control over their own likenesses. 

She specifically referenced tools like Grok, integrated into Elon Musk's X platform, which have been implicated in generating sexualized depictions of real people without their consent, describing this as a form of gender-based digital violence.

The lawmaker's concerns stem from a broader global surge in AI-facilitated deepfakes. 

According to reports, sexually explicit deepfakes constitute the vast majority of such content online: a 2023 study by Home Security Heroes identified over 95,000 deepfake videos, 98% of which were pornographic and 99% of which targeted women.

This issue has escalated to include synthetic child sexual abuse material, posing significant challenges for law enforcement worldwide.

Ezzoumi's query specifically asks what preventive, technical, and legislative measures the Moroccan government intends to implement to curb this emerging threat. 

While the minister's response is pending, the inquiry arrives amid heightened international scrutiny of AI ethics and safety.

Globally, similar alarms have prompted action. In the United States, the National Center for Missing and Exploited Children (NCMEC) documented 4,700 reports in 2023 related to AI-generated child sexual abuse material and exploitative content.

In Europe, a coordinated Europol operation in February 2025 led to 25 arrests across 19 countries over AI-produced child sexual abuse material.

The United Kingdom, meanwhile, advanced legislation in 2025 to criminalize both the creation of sexually explicit deepfakes of adults and the use of AI to generate child sexual imagery.

Ezzoumi's concern reflects a worldwide push for accountability in AI development.

Advocacy groups argue that without robust safeguards, generative technologies risk amplifying harm, especially to vulnerable populations. Ezzoumi's intervention underscores the urgent need for balanced innovation that prioritizes human rights as AI continues to evolve.

Experts suggest that effective responses could include watermarking AI-generated content, enhancing detection algorithms, and enacting laws that hold platforms liable for hosted material. 

Morocco, with its ongoing digital transformation initiatives, stands at a crossroads where proactive policy could set a regional precedent.

The government has yet to issue an official statement on Ezzoumi's query, but observers anticipate it could spark broader parliamentary discussions on digital ethics in the kingdom.