Character.AI, the platform that lets users converse with AI chatbots embodying fictional personas, has recently enacted significant policy changes restricting underage users. The decision comes amid legal challenges, including a wrongful death lawsuit filed before the current leadership took over, and a growing body of research on the potential hazards of AI interactions for children.
Karandeep Anand, who became Chief Executive Officer of Character.AI after litigation against the company had already begun, recently discussed the motivations behind the new approach. He noted that the legal case, brought by a mother over her teenage son's tragic death, predates his tenure. More importantly, Anand pointed to a growing body of evidence on the long-term effects of chatbot engagement as a crucial factor behind the platform's new restrictions.
"Emerging studies suggest that persistent interaction with chatbots can have uncertain or potentially unhealthy effects, particularly for younger individuals," Anand explained, citing influential research from companies such as OpenAI and Anthropic. These studies address concerns about a phenomenon dubbed AI sycophancy, in which users may develop unhealthy attachments or behaviors shaped by an AI's eagerness to please.
Consequently, Character.AI now restricts open-ended text conversations to users aged eighteen and over. The ban does not, however, cover all of the platform's services. Younger users retain access to other interactive features, such as a short-form video feed resembling popular social media formats. These offerings let children engage by personalizing AI-generated content, for example by inserting their chosen characters or adjusting prompts, fostering creativity in a more controlled environment.
In a personal disclosure, Anand revealed that his own six-year-old daughter uses Character.AI under his supervision, via his account. He described her interactions as a modern form of imaginative play conducted through dialogue with AI characters. "Where she once engaged in daydreaming, she now explores storytelling by creating and conversing with characters," he said. Notably, the platform's terms bar users under thirteen, which is why her use runs through his supervised account.
Anand expressed confidence that pivoting toward gamified experiences for children will keep users engaged while prioritizing safety. He acknowledged that the new restrictions may cost the platform some users in the short term but views this as a tolerable trade-off in pursuit of more responsible AI use. "I'm willing to accept that some users may discontinue use," he stated, "but our goal is to develop more captivating experiences that comply with safety standards."
Looking ahead, Anand did not rule out reinstating open-ended chatbot access for younger users. He suggested that advances in moderation technology may eventually make conversational experiences safe for minors. Nevertheless, the company's current stance marks a departure from its previous reputation, positioning Character.AI as an advocate for stronger protection of young users online.
Furthermore, Anand expressed support for recent legislative measures, such as a bill proposed by Senator Josh Hawley, advocating a nationwide prohibition on AI companion app usage by individuals under 18. He emphasized the necessity for uniform regulatory standards to ensure safety across platforms. "It would be unfortunate if restrictions on safe platforms simply drive younger users towards less responsible alternatives," Anand remarked. "Elevating safety norms for under-18 users through regulation is essential."
Additional Context from TIME's "In the Loop" Newsletter: The European Union is reportedly considering modifications to its General Data Protection Regulation to ease privacy constraints, aiming to attract more AI investment. This involves potential allowances for AI companies to utilize previously protected personal data for training purposes.
Moreover, a notable commentary by Andrea Miotti, head of Control AI, calls for a global collaborative movement to prevent the development of superintelligent AI on existential-risk grounds. Miotti likens the potential movement to past successful international environmental efforts.