Meta Faces Serious Allegations Over Handling of Youth Safety on Instagram
November 22, 2025
Technology News


Newly Unsealed Court Documents Reveal Claims of Neglect and Misleading Practices Regarding Teen User Risks

Summary

Court documents unsealed in a major lawsuit reveal damaging allegations against Meta concerning its management of Instagram, particularly the safety and well-being of young users. The filings include testimony from former employees and internal company research suggesting that, despite awareness of risks such as sex trafficking, mental health harm, and harmful content exposure, Meta prioritized growth over necessary safety changes. The case involves more than 1,800 plaintiffs who accuse multiple social media platforms of neglecting children's mental and physical health in pursuit of profit.

Key Points

Instagram maintained a high "strike" threshold, allowing up to 16 sex trafficking violations before account suspension, according to testimony from a former safety executive.
Meta faced allegations of concealing internal research showing links between Instagram usage and increased rates of anxiety and depression in teenage users.
The company reportedly delayed or resisted safety feature implementations, such as making all teen accounts private by default, due to concerns about user engagement and growth metrics.
Instagram’s platform allowed easy reporting of minor violations like spam but did not provide a straightforward mechanism to report child sexual abuse content.
Internal audits revealed large numbers of potentially inappropriate adult-to-teen interactions facilitated by the platform, which increased significantly over time, especially after features like Instagram Reels were introduced.
Meta aggressively targeted youth users, including efforts to increase teen engagement during school hours, despite knowing that many users under 13 were accessing the platform against company policy.
Several features designed to reduce harmful social comparison and body image issues, such as hiding likes and controlling beauty filters, were either not implemented or rolled back due to negative impacts on growth metrics.
AI tools identified harmful content but frequently did not automatically remove it, allowing risky material related to self-harm, eating disorders, and child exploitation to remain accessible to minors.

The recent unsealing of court filings from a sweeping lawsuit against major social media companies has brought to light significant accusations against Meta concerning its Instagram platform. Central to these allegations is the claim that Meta tolerated and inadequately addressed sex trafficking on its platform, and that it systematically neglected user safety, particularly among minors, in order to prioritize growth and engagement metrics.

The lawsuit involves more than 1,800 plaintiffs, including minors, their parents, educational districts, and state attorneys general. It asserts that the parent companies behind Instagram, TikTok, Snapchat, and YouTube pursued aggressive expansion strategies at the expense of children's physical and mental health.

One of the plaintiffs’ briefs, recently unsealed by the U.S. District Court for the Northern District of California, draws heavily on sworn testimony from current and former Meta executives, internal communications, and company research obtained during discovery. Due to court restrictions, full access to the underlying documents remains limited.

At the heart of the brief is a deposition from Vaishnavi Jayakumar, Instagram’s former head of safety and well-being, who joined Meta in 2020. She expressed shock upon learning that Instagram employed a notably high "strike" threshold concerning accounts involved in sex trafficking. According to her testimony, the company operated a policy permitting up to sixteen violations related to prostitution and sexual solicitation before suspending the offending account, a figure she described as "very, very high" compared to industry standards. This policy was reportedly corroborated by internal corporate documentation included in the filings.

Furthermore, the brief alleges that Instagram lacked an accessible mechanism for users to report child sexual abuse material, despite publicly advertising a zero-tolerance stance on such content. Jayakumar reportedly raised concerns about the difficulty in addressing child sexual abuse content, but her efforts were met with resistance. Paradoxically, the platform allowed users to report less severe infractions such as spam or promotion of firearms more readily.

These allegations are accompanied by claims that Meta knowingly misled the public and lawmakers about the potential harms Instagram and Facebook pose to teenagers' mental health. Internal research documented increased rates of anxiety and depression among teens who frequently used these platforms. Notably, a deactivation study conducted by Meta found that teens who stopped using the platforms for a week experienced reductions in anxiety, loneliness, and depression. The study was allegedly discontinued and never disclosed publicly, with company representatives reportedly attributing the decision to concerns that negative media narratives had biased the results.

When questioned by the Senate Judiciary Committee in 2020 about correlations between teenage girls' increased platform use and rising depression and anxiety, Meta answered in the negative, a response plaintiffs interpret as intentional concealment of findings that might expose the company's role in these harms.

Internal debates also centered on the company's reluctance to adopt recommended safety features. For instance, research teams suggested that setting all teen accounts to private by default could significantly reduce harmful interactions between adults and minors. Nonetheless, executive teams delayed or resisted implementing the change, ostensibly out of fear of diminished engagement and slower product growth.

According to court documents, this inaction contributed to a dramatic increase in inappropriate adult-teen interactions on Instagram, amplifying risks to vulnerable youth. Specific platform features, such as Instagram Reels, reportedly exacerbated issues by enabling teens to broadcast videos to broad audiences, including adult strangers.

Meta’s approach to user safety also extended to content moderation technology. Plaintiffs present evidence claiming that even when AI-based tools identified content violations with high confidence, posts related to child sexual exploitation, eating disorders, or self-harm were not consistently removed automatically. This lack of prompt removal allowed harmful material to persist and remain accessible to teen users.

Moreover, the company is accused of aggressively marketing to younger demographics, with internal documents revealing deliberate goals to increase "teen time spent" and acquire new teen users. This strategy allegedly included outreach to school districts and timed push notifications targeting students during class. Despite policies prohibiting users under age 13, internal data suggested millions of children younger than thirteen accessed Instagram, raising questions about enforcement of age restrictions.

The plaintiffs’ brief also highlights Meta’s internal struggles with product features that could mitigate mental health risks but conflicted with business interests. One example is the experimental hiding of "likes" on posts, intended to reduce social comparison anxiety. Although tests indicated the feature's potential benefit, it was ultimately dropped due to adverse effects on key performance metrics. Similarly, beauty filters, known to negatively affect body image, were briefly removed but reintroduced after concerns about the impact on growth.

Responding to these allegations, a Meta spokesperson strongly denied the claims, characterizing the court filings as selective and misleading. The company stated it has consistently made changes to enhance teen safety, including launching Instagram Teen Accounts in 2024, which place minors aged 13 to 18 into private settings by default, with limited content exposure and parental oversight. Meta also emphasized ongoing efforts to combat inappropriate content through advanced detection technologies supplemented by human review.

Nonetheless, the lawsuit and accompanying filings portray a company that has balanced safety initiatives against growth ambitions, often erring toward the latter. Internal testimonies cited in the brief paint a picture of executives and growth teams prioritizing engagement metrics over the well-being of younger users.

The lawsuit remains ongoing, with the court keeping certain documents under seal amid continuing debate over public access to pertinent records. The defendants' conduct and the eventual judicial findings in this multidistrict litigation are likely to have significant implications for how tech companies manage youth safety and transparency going forward.

Risks
  • Continued exposure of minors to harmful content and inappropriate interactions due to delayed or insufficient safety measures.
  • Potential for increased legal and regulatory scrutiny of Meta's youth safety practices, impacting company reputation and operations.
  • Possible erosion of user and public trust in Meta owing to allegations of misleading disclosures to Congress and withholding research findings.
  • Challenges balancing product growth objectives against implementing safety features that may reduce user engagement and revenue.
  • Ongoing litigation risk with unknown financial and operational consequences tied to multidistrict lawsuits involving multiple plaintiffs and allegations.
Disclosure
Education only / not financial advice