In an era where artificial intelligence tools have become increasingly sophisticated and accessible, the misuse of such technology to create explicit deepfake images without consent has surfaced as a serious concern. Among those affected is Elliston Berry, who at the age of 14 endured the creation and circulation of a deepfake nude image made by a peer. Berry’s experience highlighted not only the personal trauma inflicted but also the systemic lack of information and support available for victims facing this form of harassment.
Following her ordeal at a Texas high school, Berry, now 16, has taken an active role in developing a resource aimed at sparing other young people the same distress. In partnership with Adaptive Security, a cybersecurity firm, and Pathos Consulting Group, she helped design an online training curriculum for students, educators, and parents. The program’s goal is to raise awareness of deepfake image abuse, including how to identify it and where to find help and legal remedies.
The course, which takes approximately 17 minutes to complete, targets middle and high school communities. It covers topics such as how to recognize AI-generated deepfakes, the nature and consequences of deepfake sexual abuse, and sextortion, a coercive scheme in which perpetrators deceive victims into sharing explicit images and then threaten to release them, often demanding money or additional compromising material. This pernicious form of exploitation has impacted thousands of teenagers and has been linked to tragic outcomes, including suicide.
Legislative measures complement educational efforts. The Take It Down Act, a bill Berry actively supported that President Donald Trump signed into law last year, criminalizes the dissemination of non-consensual explicit images, whether authentic or computer-generated. Importantly, the law requires online platforms to remove such imagery within 48 hours of being notified, a marked improvement over the delays victims previously faced. Berry recounts that in her own case, it took nine months to have the deepfake images of her removed from social media.
Berry also emphasized the initial confusion and lack of preparedness she encountered from school leadership during her experience. "They were more confused than we were, so they weren’t able to offer any comfort, any protection to us," she said. This gap in understanding motivated her to focus the training on educators to ensure that victims receive timely support and protection when they come forward.
Brian Long, CEO of Adaptive Security, underscored that the training is meant for more than potential victims. "It’s not just for the potential victims, but it’s also for the potential perpetrators of these types of crimes," he said. He cautioned that such actions are not harmless pranks but illegal and profoundly damaging behaviors.
The course also includes links to support resources from organizations such as RAINN, along with an outline of the legal ramifications under the Take It Down Act and practical guidance on having abusive images removed from the internet.
Notably, incidents of deepfake abuse appear to be ongoing and frequent. Berry noted that she personally knows several young women who have encountered this form of harassment in the past month alone. For her and many others, the issue remains an urgent and frightening reality, especially given that awareness and understanding of the problem are still limited in many communities.
The training is freely available to educational institutions and parents, aiming to empower them with the tools needed to recognize, address, and prevent deepfake sexual abuse proactively. Through education and legal enforcement, advocates like Berry seek to foster safer environments for youth navigating an increasingly complex digital landscape.