The Trump administration has openly used artificial intelligence (AI)-generated images for online promotion, frequently posting cartoonish visuals and memes on official White House social media channels. But its recent deployment of a realistically edited image depicting civil rights attorney Nekima Levy Armstrong in tears after her arrest has sparked heightened concern about the administration's willingness to blur the line between authentic content and manipulated imagery.
Homeland Security Secretary Kristi Noem’s social media account posted the unaltered photograph of Levy Armstrong’s arrest. Shortly thereafter, the official White House account shared a modified version showing Levy Armstrong appearing tearful, part of a wave of AI-altered images that circulated across the political landscape after the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol agents in Minneapolis.
While the administration embraces AI-edited content as a communication strategy, experts specializing in misinformation emphasize the risks this practice poses. They argue that the proliferation of AI-generated or modified images by credible official sources undermines the public’s ability to discern truth, ultimately fostering a climate of distrust toward government communications.
In light of criticism over the altered image of Levy Armstrong, White House officials have maintained their stance, with Deputy Communications Director Kaelan Dorr affirming on the platform X that the "memes will continue," signaling an ongoing commitment to this digital strategy. White House Deputy Press Secretary Abigail Jackson further dismissed detractors by sharing a post mocking the backlash.
Cornell University Professor David Rand, an information science expert, noted that labeling the edited Levy Armstrong image as a "meme" appears designed to frame it as humorous content akin to previously shared cartoons, likely as a defensive tactic against criticism of distributing manipulated media. Rand characterized the intentions behind disseminating this particular altered arrest image as more ambiguous than those behind the administration’s prior cartoonish AI depictions.
Zach Henry, a Republican communications specialist and founder of an influencer marketing firm, observes that memes inherently involve nuanced messaging—humorous or informative to insiders but obscure to outsiders. AI-enhanced imagery represents the latest means by which the White House targets Trump’s online-engaged base. He explained that while a deeply internet-savvy audience recognizes such content as meme culture, less digitally immersed demographics might misinterpret realistic-looking images as factual, leading to inquiries and wider discussion that amplifies viral spread.
The emphasis on provoking strong emotional reactions boosts content virality, a tactic Henry broadly credits to the savvy of the White House social media team.
Michael A. Spikes, a Northwestern University professor and media literacy researcher, emphasized that altered images from trusted government entities bypass genuine representation and instead construct narratives that audiences perceive as real. He underscored that governments bear a responsibility to provide accurate, verified information, and that sharing manipulated visuals jeopardizes essential trust in federal communications, signaling a troubling decline in public confidence.
Spikes views such actions as contributing to broader institutional crises marked by skepticism toward media and academic institutions. UCLA Professor Ramesh Srinivasan concurs, highlighting rising public uncertainty regarding reliable sources of information. Srinivasan warns that AI technologies will amplify these trust deficits and blur delineations of reality, truth, and evidence.
He further argued that official dissemination of synthetic content not only encourages everyday individuals to replicate similar posts but also legitimizes the sharing of unlabeled synthetic content by influential figures, including policymakers. Because social media algorithms often favor sensational or conspiratorial content, which AI tools can produce effortlessly, he flagged this as a profound emerging challenge.
AI-generated video content concerning Immigration and Customs Enforcement (ICE) activities, including enforcement actions, protests, and citizen interactions, is already widespread on social media. Following the death of Renee Good at the hands of an ICE officer, numerous AI-created videos depicting women driving away from ICE personnel circulated on social channels. Additional fabricated clips show staged confrontations with ICE officers, featuring acts such as shouting or throwing food.
Jeremy Carrasco, an expert in media literacy and debunking viral AI content, attributed most of these videos to "engagement farming" accounts aiming to monetize traffic through trending keywords like ICE. However, he noted the videos also attract viewers opposed to ICE and DHS who may treat them as "fan fiction" expressing hopes of authentic resistance. Carrasco voiced concern that most viewers likely cannot distinguish fabricated footage from reality, raising questions about public discernment when the stakes are high.
Even overt signs of AI manipulation, such as nonsensical street signage, do not guarantee that viewers recognize the footage as artificial, according to Carrasco. Although this challenge extends beyond immigration-related content, as the fake images that went viral after Nicolás Maduro’s capture recently demonstrated, it underscores the growing prevalence of AI-generated political media.
Carrasco identified media watermarking systems that embed origin data as a promising solution, noting ongoing development by the Coalition for Content Provenance and Authenticity. Nevertheless, he anticipates that widespread adoption will take at least another year, leaving the problem to persist and escalate in the meantime.
This evolving landscape of AI-mediated political communication raises critical questions about the integrity of public information, the ethical use of emerging technologies by officials, and the implications for democratic discourse and trust.