Why Are Grok and X Still Available in App Stores?

Elon Musk’s AI chatbot Grok is being used to flood X with thousands of sexualized images of adults and apparent minors wearing minimal clothing. Some of this content not only appears to violate X’s own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but may also violate the guidelines of Apple’s App Store and the Google Play store.

Apple and Google both explicitly ban apps containing CSAM, which is illegal to host and distribute in many countries. The tech giants also forbid apps that contain pornographic material or facilitate harassment. Apple’s App Store guidelines prohibit “overtly sexual or pornographic material” and “defamatory, discriminatory, or mean-spirited content,” especially if an app is “likely to humiliate, intimidate, or harm a targeted individual or group.” The Google Play store bans apps that “contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content,” as well as programs that “contain or facilitate threats, harassment, or bullying.”

Over the past two years, Apple and Google removed a number of “nudify” and AI image-generation apps after investigations by the BBC and 404 Media found they were being advertised or used to effectively turn ordinary photos into explicit images of women without their consent.

But at the time of publication, both the X app and the stand-alone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk’s multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said that it takes action against illegal content on its platform, including CSAM. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the company warned.

Sloan Thompson, the director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, says it is “absolutely appropriate” for companies like Apple and Google to take action against X and Grok.

The volume of nonconsensual explicit images on X generated by Grok has exploded over the past two weeks. One researcher told Bloomberg that over a 24-hour period between January 5 and 6, Grok was producing roughly 6,700 images per hour that the researcher identified as “sexually suggestive or nudifying.” Another analyst collected more than 15,000 URLs of images that Grok created on X during a two-hour period on December 31. WIRED reviewed approximately one-third of those images and found that many featured women dressed in revealing clothing. Within a week, more than 2,500 were marked as no longer available, while almost 500 were labeled as “age-restricted adult content.”

Earlier this week, a spokesperson for the European Commission, the governing body of the European Union, publicly condemned the sexually explicit and nonconsensual images being generated by Grok on X as “illegal” and “appalling,” telling Reuters that such content “has no place in Europe.”

On Thursday, the EU ordered X to retain all internal documents and data relating to Grok until the end of 2026, extending a prior retention directive. The order is meant to ensure authorities can access materials relevant to compliance with the EU’s Digital Services Act, though a new formal investigation has yet to be announced. Regulators in other countries, including the UK, India, and Malaysia, have also said they are investigating the social media platform.

Grok and X are part of a multimillion-dollar industry peddling “nudify” services online. Over the past few years, dozens of stand-alone apps and websites have popped up that promise to digitally strip women without their consent, often marketing themselves as harmless novelty tools while enabling image-based sexual abuse. Mainstream AI companies have also struggled to prevent their tools from being used to generate nonconsensual sexualized imagery. For example, WIRED reported last month that people were sharing tips online about how to get Google and OpenAI’s generative AI chatbots, Gemini and ChatGPT, to alter pictures of women to depict them wearing bikinis and other revealing clothing.

Lawmakers in the US and other countries have begun cracking down on nonconsensual AI deepfakes. Last year, President Donald Trump signed the TAKE IT DOWN Act, which makes it a federal crime to knowingly publish or host nonconsensual sexual images. But Thompson says the law is limited by the fact that companies are only required to begin the removal process after a victim chooses to come forward.

“Private companies have a lot more agency in responding to things quickly,” Thompson says. “When we talk about other tools for addressing image-based abuse—lawsuits take time, and it takes time for laws to be passed, especially right now, when we have technologies that are hitting the market at a breakneck pace. It’s very, very difficult for laws to be passed at the same pace.”

David Greene, the civil liberties director at the Electronic Frontier Foundation, says people should be cautious about the idea of removing entire platforms from app stores. He emphasizes that X and xAI both have the power to combat this problem themselves.

Greene argues that Musk’s companies could put in place better technical safeguards to deter users from creating deepfakes and other kinds of sexualized imagery. They “might not be a perfect fix, but might at least add some friction to the process,” he adds.
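
The kind of friction Greene describes can be as simple as screening prompts before they ever reach an image model. The sketch below is a minimal, purely hypothetical illustration of that idea; nothing in it reflects how X or xAI actually operates, and every pattern, function, and name is invented for this example.

```python
import re

# Hypothetical patterns for prompts that try to sexualize or undress a
# subject. A real system would use trained classifiers, not keyword lists.
BLOCKED_PATTERNS = [
    r"\b(undress|nudify|strip)\b",
    r"\b(remove|take off)\b.{0,40}\b(clothes|clothing)\b",
]

def should_block(prompt: str) -> bool:
    """Return True if the prompt matches any flagged pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def handle_request(prompt: str) -> str:
    # Refuse flagged prompts before generation; a production system would
    # also scan generated images and route edge cases to human review.
    if should_block(prompt):
        return "Request refused."
    return generate_image(prompt)

def generate_image(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"<generated image for: {prompt}>"
```

Keyword filters like this are trivially easy to evade, which is why Greene frames such measures as friction rather than a fix.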

Thompson agrees that companies like X and xAI should be subject to more public pressure to prevent these sorts of photos and videos from being created in the first place. “That’s where I think we need intervention,” she says.