Grok AI model still generating sexualized content
“Nudify” apps and websites, which produce sexualized deepfake images of real people using generative AI, are not a new phenomenon. But Grok – which is free, has looser restrictions than other chatbots, is marketed as “anti-woke,” and is seamlessly integrated into X – has pushed the practice into the mainstream.
Elon Musk’s AI chatbot is making misleading claims after being blasted for nonconsensual sexual images of users
In response to the Grok "remove clothes" trend, X said it would remove illegal content and ban users generating it with the AI tool.
Elon Musk’s chatbot has been used to generate thousands of sexualized images of adults and apparent minors. Apple and Google have removed other “nudify” apps—but continue to host X and Grok.
Victims, experts and campaigners warn that the misuse of X's AI chatbot, Grok, is eroding consent online and exposing serious ethical failures. From altered images to violated autonomy, the growing misuse of Grok is forcing a reckoning over safety.
Grok, X's AI chatbot, generates about 6,700 sexually suggestive images per hour — roughly 85 times more than the five largest alternative platforms combined. Victims report their complaints are dismissed by X's moderation system.
As authorities call for xAI to be held to account for the 'behaviour' of Grok, Jonathan McCrea asks if it is truly time to demand better from Big Tech.
AI image tool Grok has produced a sexually explicit manipulated image of Minneapolis ICE shooting victim Renée Nicole Good, prompting renewed debate over AI ethics.
Vice President JD Vance and Fox News’ Sean Hannity gushed over Grok, the AI property owned by Elon Musk, during an interview that aired on Hannity’s show Thursday night. Hannity kicked off the tangent by asking Vance: “Are you as obsessed with AI as ...