Apple threatened to remove xAI’s Grok from its App Store over sexualized deepfakes

In early 2026, Elon Musk’s xAI faced intense backlash after its Grok chatbot was used to generate non-consensual sexualized deepfakes, including images of women and minors. Advocacy groups and lawmakers quickly demanded action. A newly surfaced letter reveals that Apple privately threatened to boot the Grok app from its App Store unless xAI addressed violations of Apple’s guidelines.

According to the letter Apple sent to U.S. senators, which was obtained by NBC News, the company reviewed xAI’s submissions and found that while the X app had largely resolved its issues, the standalone Grok app was still not in compliance. Apple rejected that update and warned that without further changes the app could be removed entirely. Only after additional improvements did Apple approve the latest version.

The Backdrop: A Surge of Deepfake Abuse

Early in 2026, Grok’s image generation features led to a flood of sexualized deepfakes shared on X. Reports showed how the tool could create explicit images of real people without consent. This sparked widespread outrage, with critics arguing it enabled harmful non-consensual intimate imagery and, in some cases, even child sexual abuse material.

Pressure mounted fast:

• Coalitions of women’s rights, child safety, and digital rights organizations called on Apple and Google to pull Grok and X from their app stores.
• Democratic senators wrote to Tim Cook and Sundar Pichai demanding stricter enforcement.
• Some countries considered or implemented blocks on X over the controversy.

xAI responded by adding restrictions, such as limiting image features for certain users, geoblocking in specific regions, and tightening moderation. However, the initial fixes did not fully satisfy Apple, especially for the dedicated Grok app.

Apple’s Stance: Enforcing Guidelines Behind the Scenes

Apple has long maintained the App Store as a curated and safe platform with strict review rules. Its guidelines prohibit apps that facilitate sexual exploitation or harmful content. In this case, Apple chose private engagement rather than public confrontation:

• It notified xAI of the violations.
• It rejected non-compliant updates.
• It used the threat of removal to push for improvements.

The letter to senators shows Apple defending its record of proactive moderation amid growing political and activist pressure. Importantly, the Grok app was never actually removed. It remained available after xAI made the required changes.

This situation highlights a familiar tension in Big Tech: platform responsibility versus free expression and innovation.

The Deeper Issues at Play

• Deepfakes Are a Real and Growing Problem: AI image tools have made it far easier to create convincing fakes. When used without consent for sexual or harassing purposes, the damage is real: reputational harm, emotional trauma, and in extreme cases, facilitation of abuse. Regulators around the world are racing to address non-consensual intimate imagery.

• App Store Gatekeeping vs. Open AI Development: Apple’s control over the App Store gives it significant power over what reaches hundreds of millions of iPhone users. Critics of the threat argue it shows selective enforcement or yielding to political pressure. Supporters say Apple was simply applying the same rules that every developer must follow.

• xAI and Musk’s Philosophy: Elon Musk and xAI have built Grok as a “maximum truth-seeking” AI with fewer restrictions than tools like ChatGPT. This approach emphasizes creative freedom and uncensored responses, but it can conflict with safety demands when it comes to image generation. Musk has long criticized what he sees as excessive censorship on other platforms.

Selective Outrage?

Many other AI tools have faced similar problems with deepfakes and harmful outputs. The intense spotlight on Grok often appears linked as much to Musk’s public profile and politics as to the technology itself. Harmful content continues to spread across the open web and other platforms daily.

What Should Happen Next?

Responsible AI development means balancing innovation with real harm prevention. Blanket bans or aggressive app store removals risk stifling competition and concentrating power in the hands of a few gatekeepers. Better approaches include:

• Technical safeguards such as better detection of real-person deepfakes, consent-based limits, and watermarking.
• Greater transparency with clear policies on what models can generate.
• Stronger laws that target malicious users instead of the tools themselves.
• More competition so users have genuine choices among AI models with different safety levels.

Apple ultimately worked with xAI to resolve the issue without removing the app. This suggests that negotiation and iterative fixes can often be more effective than outright bans.

Free Speech, Safety, and the Future of AI

This was never just about one app or one incident. It became a flashpoint in the larger debate over who gets to control AI: governments, big tech companies, activist groups, or the developers and users themselves.

Grok remains available on the App Store today because xAI addressed Apple’s concerns. Yet the core challenge of preventing abuse while preserving AI’s open and exploratory potential will not vanish with a single policy update.

MacDailyNews Take: Tools like Grok, which we use daily and recommend, represent both the promise and the risks of less-censored AI. Getting this balance right will shape not only app stores, but the future of information, creativity, and personal freedom online.


