On Thursday, Paris Hilton publicly endorsed bipartisan legislation at the U.S. Capitol aimed at protecting individuals from abuses of deepfake technology. Her involvement underscores the growing urgency around regulating AI-generated explicit imagery, particularly images produced or disseminated without consent.
Hilton appeared alongside Representatives Alexandria Ocasio-Cortez, a Democrat from New York, and Laurel Lee, a Republican from Florida, to promote the Disrupt Explicit Forged Images and Non-Consensual Edits Act, commonly referred to as the DEFIANCE Act. The bill targets the creation and circulation of nonconsensual deepfake pornography, a growing concern as artificial intelligence capabilities advance.
Hilton told ITK, "Coming here to the Capitol, to D.C., and doing my advocacy work—it's truly been the most meaningful work of my life." She added that she is grateful to use her public profile to draw attention to causes that need it, underscoring her commitment to this issue.
The DEFIANCE Act seeks to give victims of AI-generated explicit images legal recourse, allowing them to bring civil suits against those responsible for producing or distributing such content. The bill has drawn bipartisan support, and advocates are urging Speaker Mike Johnson of Louisiana, a Republican, to bring it to a House floor vote following its passage by unanimous consent in the Senate.
Hilton's involvement is informed by her own experience with nonconsensual intimate imagery. She recounted that when she was 19, a private, intimate video of hers was released publicly without her consent. Hilton characterized that violation not as a "scandal," as some called it at the time, but as an unequivocal form of abuse, and she emphasized that the legal system offered inadequate protection, leaving victims like her feeling vulnerable and disempowered.
Highlighting the broader implications, Hilton noted the chilling effect these abuses have on women's online presence and overall freedom, stating, "Too many women are afraid to exist online, or sometimes to exist at all." Her remarks underscore the societal and psychological toll of unchecked deepfake abuse and the need for stronger legal protections.
The issue addressed by the DEFIANCE Act extends beyond any single technology platform and has come into sharper focus this month amid renewed concerns about Elon Musk's Grok chatbot, which has drawn criticism for generating sexualized images in response to user prompts, sparking public outcry and intensifying calls for stronger legislative safeguards.
The Senate's approval of the DEFIANCE Act by unanimous consent earlier this month marked a crucial step toward expanding protections for victims of AI-generated explicit content. The bill would broaden existing law to specifically cover the creation, distribution, and solicitation of nonconsensual sexual deepfake material, and it establishes a private right of action, enabling victims to seek civil remedies on their own.
Complementing the DEFIANCE Act is the TAKE IT DOWN Act, a related law enacted last year that requires online platforms to remove child sexual abuse material and nonconsensual intimate images within 48 hours of a victim's complaint. Former First Lady Melania Trump was among its advocates and played a notable role in pushing the legislation forward.
Together, these legislative efforts represent a growing recognition by lawmakers of the need to address complex issues at the intersection of emerging AI technologies and individual privacy rights. Paris Hilton’s advocacy illustrates the personal and societal stakes involved, bridging celebrity influence and policy action in a bid to secure more effective tools to protect victims of deepfake abuse.