Apple Pressured xAI Over Grok AI Content Violations, Threatened App Store Removal

Apple Inc. privately warned xAI that its Grok chatbot risked removal from the App Store earlier this year after the tool was found generating sexualised deepfake images, according to a letter obtained by NBC News.

The disclosure sheds light on how the iPhone maker handled the controversy behind closed doors, even as it remained publicly silent. The dispute centred on Grok, an artificial intelligence chatbot integrated with X (formerly Twitter), which users reportedly exploited to create manipulated images of women and minors without consent.

Apple said it contacted both X and Grok developers after receiving complaints and monitoring media reports about misuse of the technology. The company concluded that the apps had breached its App Store policies and instructed developers to submit plans to strengthen content moderation.

According to the letter, xAI initially responded by submitting an updated version of the Grok app. However, Apple rejected the revision, stating that the changes did not sufficiently address the violations. The company warned that failure to comply could result in the chatbot being removed entirely from the App Store.

Subsequent submissions were reviewed separately. Apple said that while the X app had largely resolved its issues, Grok remained out of compliance. The chatbot was rejected again, with Apple reiterating that further improvements were required before it could remain available to users.

Only after additional revisions and ongoing discussions did Apple approve a later version of the Grok app, saying the updated submission had made meaningful progress in addressing the concerns. The process helps explain a series of moderation changes introduced by xAI during the backlash, including tighter restrictions on image generation tools and limits on editing photos of real individuals.

Despite those measures, concerns about the chatbot’s safeguards persist. A recent NBC News report found that users were still able to bypass restrictions and generate inappropriate images, raising questions about the effectiveness of current controls.

The episode highlights growing scrutiny of artificial intelligence platforms and the challenges technology companies face in policing misuse. As AI tools become more advanced and widely accessible, regulators and platform operators are under increasing pressure to ensure that safeguards keep pace with the risks.

Apple’s intervention also signals a tougher stance on enforcement within its ecosystem, particularly when apps are found to violate rules around harmful or non-consensual content.