On January 21 in Davos, Switzerland, leaders from the technology sector, academia, and other fields gathered for a dialogue on responsible artificial intelligence development. Convened by TIME CEO Jess Sibley, the roundtable explored frameworks and strategies for ensuring that AI evolves safely and ethically while continuing to drive innovation across industries.
The meeting covered a broad spectrum of issues, centering on AI’s influence on young minds, policy approaches to regulating the technology, and ways to refine AI training to prevent harm to humans.
Balancing Child Development with Technology Exposure
Jonathan Haidt, a professor specializing in ethical leadership at NYU Stern and author of The Anxious Generation, emphasized that instead of attempting to eliminate children’s exposure to AI and digital technologies, caregivers should prioritize cultivating healthy usage habits. He recommended delaying smartphone use until children reach at least high school age, suggesting that cognitive functions crucial for responsible interaction with technology develop sufficiently by that stage. "Let their brain develop, let them get executive function, then you can expose them," Haidt stated, underscoring the importance of timing in children's engagement with digital tools.
Scientific Insight and Regulatory Suggestions for AI Safety
Yoshua Bengio, a professor at the Université de Montréal and founder of the AI safety nonprofit LawZero, highlighted the necessity of scientific understanding in overcoming AI-related challenges. Bengio, recognized as one of the pioneers of modern AI research, presented two primary strategies for mitigating risk. First, AI systems should be designed with embedded safeguards against adverse developmental effects on children; he noted that market demand could drive the creation of such safeguards.
Second, Bengio argued for an expanded government role in regulating AI. He proposed that requiring AI developers and deployers to carry liability insurance would make insurers de facto regulators, since they would have a financial stake in holding companies accountable.
Global Coordination Beyond Competitive Tensions
The geopolitical dynamic, particularly the AI competition between the US and China, is often cited as a barrier to stringent regulation. However, Bengio argued that both nations share an interest in preventing harmful AI outcomes within their own borders, such as threats to children’s safety or the misuse of AI for biological weapons and cyberattacks. Drawing a parallel with Cold War arms control, he advocated international cooperation that transcends competition to curb the emergence of dangerous AI applications.
Addressing Attention Economy and Ethical AI Use
Participants further examined analogies between current AI platforms and social media companies, especially their competition for users’ attention. Bill Ready, CEO of Pinterest, which sponsored the event, criticized engagement-driven business models that exploit negative human impulses and foster division and conflict. He remarked, “We’re actually preying on the darkest aspects of the human psyche, and it doesn’t have to be that way.”
Under Ready’s leadership, Pinterest shifted its optimization target from maximizing viewing time to improving broader user outcomes, including activity beyond the platform itself. Though the change initially depressed short-term engagement metrics, he said it ultimately improved retention and brought users back to the platform more often.
Designing AI with Intrinsic Safety and Ethical Integrity
Bengio underscored the need to develop AI systems that offer inherent safety guarantees even as models grow in scale and are trained on ever-larger volumes of data. He called for establishing training conditions sufficient to ensure honesty and reliability in AI systems’ behavior.
Adding to this discussion, Yejin Choi, a computer science professor and senior fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, critiqued current training paradigms that inadvertently encourage misaligned behavior. She questioned the practice of training large language models on vast internet corpora, which include society’s worst elements, only to spend further effort realigning the models afterward.
Choi proposed the exploration of alternative AI architectures capable of intrinsically learning morals and human values from inception, rather than retrofitting ethical constraints.
Enhancing AI’s Role as a Human Tool Through Education and Certification
Responding to a question about AI’s potential to augment human capabilities, Kay Firth-Butterfield, CEO of Good Tech Advisory, stressed the importance of engaging AI users directly, whether workers, parents, or others, in the development process. She advocated widespread AI literacy initiatives to empower public understanding and responsible use, rather than relying solely on institutional oversight.
Firth-Butterfield added that broadening this educational outreach would enable meaningful public conversations and effective certification mechanisms to ensure trustworthy AI deployment.
Additional Participants and Event Context
The TIME100 Roundtable featured other notable attendees, including Matt Madrigal, Pinterest CTO; Matthew Prince, CEO of Cloudflare; Jeff Schumacher, Neurosymbolic AI Leader at EY-Parthenon; Navrina Singh, CEO of Credo AI; and Alexa Vignone, president for technology, media, telco, and consumer & business services at Salesforce. Salesforce CEO Marc Benioff is TIME’s co-chair and owner.
The session, titled “TIME100 Roundtable: Ensuring AI For Good — Responsible Innovation at Scale,” was presented by Pinterest, providing a platform to deliberate constructive paths in artificial intelligence development amidst its rapid evolution and societal impact.