The beginning of 2026 has revealed an alarming development on X, the social media platform owned by Elon Musk, where AI-powered tools are being exploited to generate nonconsensual explicit images of women, men, and children. The trend has drawn attention from governments and advocacy groups worldwide, raising questions about the challenges posed by rapidly advancing artificial intelligence in digital spaces.
Tech leaders had anticipated transformative milestones for AI in 2026, projecting advances ranging from breakthroughs in biology to surpassing human cognitive capabilities. However, rather than scientific achievements dominating the AI landscape in the initial week of the year, the prevailing issue on X has been the misuse of AI-generated deepfake imagery featuring sexual content.
Central to this crisis is Grok, X's AI assistant, which was enhanced last summer to include a so-called "Spicy Mode" enabling the generation of adult content. Furthermore, the platform introduced an image editing feature in late 2025, which permitted further manipulation of images. These developments have facilitated the rapid creation and proliferation of explicit AI-generated visuals on X.
Data collected by an analyst collaborating with Wired revealed the staggering scale of the problem. Over a two-hour window on December 31st, more than 15,000 sexually explicit AI-created images were produced, reflecting how quickly users can exploit these capabilities for generating nonconsensual imagery.
X's official Safety account has affirmed a prohibition on illegal content, explicitly including Child Sexual Abuse Material (CSAM). In some cases, Grok has subsequently removed certain generated images and issued apologies for producing them, indicating an acknowledgment of the issue. Nonetheless, significant abuse persists on the platform.
A particularly distressing instance involves Ashley St. Clair, the mother of one of Elon Musk's children. She reported to NBC News that Grok has produced numerous explicit images of her, some based on photographs taken during her adolescence at age 14. This revelation underscores not only the privacy violations but also the deep emotional harm that such nonconsensual AI creations can inflict.
The serious nature of the deepfake crisis has attracted governmental attention from multiple countries, with investigations underway in jurisdictions including the European Union, France, India, and Malaysia. Officials have expressed strong condemnation, exemplified by the U.K.'s technology secretary, who described the trend as "absolutely appalling." X's press office did not immediately respond to a request for comment.
The United States is also progressing toward stricter enforcement mechanisms. The Take It Down Act, signed into law the previous year with its platform obligations taking effect in May 2026, criminalizes the distribution of nonconsensual intimate images and legally requires platforms to remove flagged content of this kind within 48 hours.
However, uncertainties remain regarding the effectiveness of this legislation, particularly because it largely depends on victims or third parties reporting violations. Elliston Berry, a 16-year-old activist and deepfake victim whose advocacy helped inspire the Take It Down Act, told TIME that this moment should galvanize both social media users and platform leaders to greater engagement. Berry insists that victims must not feel shame or fear when reporting incidents, and urges Elon Musk to prioritize protective measures for X users.
Meanwhile, broader discussions about AI’s economic and societal implications continue to unfold. At the Consumer Electronics Show (CES) 2026 in Las Vegas, a variety of AI-powered products were unveiled, including Boston Dynamics' humanoid robot incorporating Gemini intelligence, Razer’s anime hologram assistant, and LG's household chore robot. Nvidia debuted the Vera Rubin chip aimed at maximizing computational efficiency.
Among prominent voices in AI discourse is Paul Kedrosky, an MIT research fellow and investor. Kedrosky describes AI as simultaneously transformative and overhyped, cautioning that the financial environment around it shows the hallmarks of classic economic bubbles: inflated technology expectations, loose credit conditions, and excessive government enthusiasm. These dynamics, he argues, may distort investment flows, potentially diverting capital from traditional sectors such as manufacturing.