The recent unsealing of court filings in a sweeping lawsuit against major social media companies has brought to light significant allegations against Meta concerning its Instagram platform. Central to these allegations is the claim that Meta tolerated and inadequately addressed sex trafficking on the platform, and that it systematically deprioritized user safety, particularly for minors, in favor of growth and engagement metrics.
The lawsuit involves more than 1,800 plaintiffs, including minors, their parents, school districts, and state attorneys general. It asserts that the parent companies behind Instagram, TikTok, Snapchat, and YouTube pursued aggressive expansion strategies at the expense of children's physical and mental health.
One of the plaintiffs’ briefs, recently unsealed by the U.S. District Court for the Northern District of California, draws heavily on sworn testimony from current and former Meta executives, internal communications, and company research obtained during discovery. Due to court restrictions, full access to the underlying documents remains limited.
At the heart of the brief is a deposition from Vaishnavi Jayakumar, Instagram’s former head of safety and well-being, who joined Meta in 2020. She testified that she was shocked to learn Instagram applied an unusually high "strike" threshold to accounts engaged in sex trafficking: under the policy, an account could accumulate as many as sixteen violations for prostitution and sexual solicitation before being suspended, a figure she described as "very, very high" by industry standards. The policy was reportedly corroborated by internal company documentation included in the filings.
The brief further alleges that Instagram lacked an accessible mechanism for users to report child sexual abuse material, despite the company's publicly advertised zero-tolerance stance on such content. Jayakumar reportedly raised concerns about the difficulty of reporting and addressing that material, but her efforts met resistance. By contrast, users could readily report less severe infractions such as spam or the promotion of firearms.
These allegations are accompanied by claims that Meta knowingly misled the public and lawmakers about the harms Instagram and Facebook pose to teenagers' mental health. Internal research documented elevated rates of anxiety and depression among teens who used the platforms heavily. Notably, a deactivation study conducted by Meta found that teens who stopped using the platforms for a week reported reductions in anxiety, loneliness, and depression. The study was allegedly discontinued and never disclosed publicly, with company representatives reportedly attributing that decision to concerns that negative media narratives had biased the results.
When the Senate Judiciary Committee asked in 2020 whether increased platform use by teenage girls correlated with rising depression and anxiety, Meta reportedly answered no, a response the plaintiffs interpret as deliberate concealment of findings that might expose the company's role in those harms.
The brief also describes internal debates over safety features the company declined to adopt. Researchers suggested, for instance, that defaulting all teen accounts to private could significantly reduce harmful interactions between adults and minors. Executive teams nonetheless delayed or resisted the change, reportedly out of concern that it would dampen engagement and product growth.
According to court documents, this inaction contributed to a dramatic increase in inappropriate adult-teen interactions on Instagram, amplifying risks to vulnerable youth. Specific features, such as Instagram Reels, reportedly exacerbated the problem by enabling teens to broadcast videos to wide audiences, including adult strangers.
The plaintiffs also take aim at Meta's content moderation technology. They present evidence claiming that even when automated tools flagged content violations with high confidence, posts related to child sexual exploitation, eating disorders, or self-harm were not consistently removed. The resulting delays allowed harmful material to persist and remain accessible to teen users.
Moreover, the company is accused of aggressively marketing to younger demographics, with internal documents revealing explicit goals to increase "teen time spent" and acquire new teen users. This strategy allegedly included outreach to school districts and push notifications timed to reach students during class. Despite policies barring users under 13, internal data suggested that millions of younger children accessed Instagram, raising questions about how age restrictions were enforced.
The plaintiffs’ brief also highlights Meta’s internal struggles over product features that could have mitigated mental health risks but conflicted with business interests. One example is the experimental hiding of "like" counts on posts, intended to reduce social-comparison anxiety; although tests indicated potential benefits, the feature was ultimately dropped after it hurt key performance metrics. Similarly, beauty filters known to harm body image were briefly removed but reintroduced over concerns about their impact on growth.
Responding to the allegations, a Meta spokesperson strongly denied the claims, characterizing the court filings as selective and misleading. The company said it has consistently made changes to improve teen safety, including launching Instagram Teen Accounts in 2024, which place users under 18 into private settings by default, with limited content exposure and parental oversight. Meta also emphasized ongoing efforts to combat inappropriate content through advanced detection technology supplemented by human review.
Nonetheless, the lawsuit and accompanying filings portray a company that has balanced safety initiatives against growth ambitions, often erring toward the latter. Internal testimonies cited in the brief paint a picture of executives and growth teams prioritizing engagement metrics over the well-being of younger users.
The litigation is ongoing, with certain documents remaining under seal and continuing disputes over public access to pertinent records. How the defendants respond and how the court ultimately rules in this multidistrict litigation are likely to have significant implications for how tech companies approach youth safety and transparency going forward.