Meta Platforms Inc., the parent company of the widely used social media apps Instagram and WhatsApp, announced a temporary suspension of teenagers' ability to interact with its AI characters. According to a company blog post published on a recent Friday, the pause will take effect in the "coming weeks" and last until an updated AI character experience is rolled out. The restriction applies broadly: it covers users whose self-reported birthdates indicate they are minors, as well as users Meta's age-prediction systems identify as likely teenagers, regardless of the age they reported.
The timing of Meta's decision occurs shortly before a scheduled trial in Los Angeles involving the company and other tech giants including TikTok and Google's YouTube, focusing on allegations related to the harmful impacts of these platforms on children.
While access to AI characters will be paused, the company's AI assistant service will remain accessible to teenage users. This marks a shift in Meta's approach to balancing innovation in AI-driven user experiences with concerns over child safety and responsible deployment.
The move to restrict AI character use among minors follows a broader trend in the technology sector, where concern is mounting about the influence of AI conversational tools on younger audiences. Character.AI, another company building AI chatbot experiences, banned teen users last fall amid ongoing scrutiny and lawsuits; its legal challenges include a claim brought by the parent of a teenager who was allegedly harmed by interactions with its chatbots.
Meta's announcement underscores the heightened focus by social media and AI firms on enhancing content safety features and revising access policies in response to regulatory and public pressures concerning youth protection online.