Teen Advocate Develops Training to Combat Harm from Deepfake Nude Images
January 12, 2026
Business News

A survivor’s initiative educates students, parents, and educators about non-consensual AI-generated sexual content and the legal protections available to victims

Summary

At just 14 years old, Elliston Berry suffered a distressing violation when a peer created and circulated deepfake nude images of her. With little guidance or support available at the time, Berry has since worked with cybersecurity specialists and consultants to create an online course that equips students, parents, and school personnel to identify and respond to non-consensual explicit deepfakes. The initiative draws attention to a growing societal problem amid the rise of accessible AI tools and underscores recent legislation designed to provide legal recourse and expedite the removal of such harmful content from digital platforms.

Key Points

Elliston Berry, at 14, became the target of non-consensual deepfake nude imagery created by a classmate and then struggled for months to get the images removed from social media.
Berry collaborated with Adaptive Security and Pathos Consulting Group to develop a 17-minute online training module focused on educating middle and high school students, parents, and school staff about deepfake abuse and sextortion.
The Take It Down Act, recently enacted and supported by Berry, criminalizes sharing explicit non-consensual images and requires platforms to remove such content within 48 hours of notification.
The training emphasizes both victim support and deterrence by informing potential perpetrators about legal consequences and the serious harm caused by such offenses.

In an era when artificial intelligence tools have become increasingly sophisticated and accessible, the misuse of that technology to create explicit deepfake images without consent has emerged as a serious concern. Among those affected is Elliston Berry, who at the age of 14 endured the creation and circulation of deepfake nude images made by a peer. Her experience exposed not only the personal trauma inflicted but also a systemic lack of information and support for victims of this form of harassment.

Following her ordeal at a Texas high school, Berry, now 16, has taken an active role in developing a resource to spare other young people the same distress. In partnership with Adaptive Security, a cybersecurity firm, and Pathos Consulting Group, she helped design an online training curriculum for students, educators, and parents. The program aims to raise awareness of deepfake image abuse, including how to identify it and where to find help and legal remedies.

The course, which takes approximately 17 minutes to complete, targets middle and high school communities. It covers how to recognize AI-generated deepfakes, the nature and consequences of deepfake sexual abuse, and sextortion, a scheme in which perpetrators deceive or coerce victims into sharing explicit images and then threaten to release them unless the victim provides money or further compromising material. This form of exploitation has affected thousands of teenagers and has been linked to tragic outcomes, including suicide.

Legislative measures complement these educational efforts. The Take It Down Act, signed into law last year by President Donald Trump and actively supported by Berry, criminalizes the dissemination of non-consensual explicit images, whether authentic or computer-generated. Crucially, the law requires online platforms to remove such imagery within 48 hours of notification, a significant improvement over previous delays in content removal; Berry recounted that in her own case, it took nine months to have the deepfake images taken down from social media.

Berry also emphasized the initial confusion and lack of preparedness she encountered from school leadership during her experience. "They were more confused than we were, so they weren’t able to offer any comfort, any protection to us," she said. This gap in understanding motivated her to focus the training on educators to ensure that victims receive timely support and protection when they come forward.

Brian Long, CEO of Adaptive Security, underscored that the training extends beyond potential victims. "It’s not just for the potential victims, but it’s also for the potential perpetrators of these types of crimes," he said. He cautioned that such actions are not harmless pranks but illegal and profoundly damaging behaviors.

The course also links to resources from organizations such as RAINN, which provides support services to survivors, and outlines both the legal ramifications under the Take It Down Act and practical guidance on having abusive images removed from the internet.

Incidents of deepfake abuse remain frequent. Berry noted that she personally knows several young women who have encountered this form of harassment in the past month alone. For her and many others, the issue remains an urgent and frightening reality, especially because awareness and understanding of the problem are still limited in many communities.

The training is freely available to educational institutions and parents, aiming to empower them with the tools needed to recognize, address, and prevent deepfake sexual abuse proactively. Through education and legal enforcement, advocates like Berry seek to foster safer environments for youth navigating an increasingly complex digital landscape.

Risks
  • Awareness and education gaps among school leadership may result in inadequate protection and support for victims of deepfake abuse.
  • The widespread availability and ease of AI tools facilitate the creation and dissemination of explicit deepfakes, increasing the incidence of this form of harassment.
  • Delays or failures by digital platforms to promptly remove non-consensual explicit content can prolong victim distress despite legal requirements.
  • Sextortion related to deepfake imagery poses severe psychological risks to teens, including instances that have led to suicide.