Experts Debate Paths to Foster Responsible AI Development Amid Rising Concerns
January 22, 2026
Technology News

TIME Hosts Davos Roundtable Spotlighting Safety, Regulation, and Ethical AI Innovation

Summary

During a TIME-facilitated roundtable in Davos, thought leaders from technology and academia convened to examine how responsible AI practices can balance innovation with safeguarding public welfare. Discussions ranged from protecting children’s cognitive development to the intricacies of AI regulation and the imperative of devising AI training methods that uphold human safety and ethical values.

Key Points

A TIME-hosted roundtable in Davos convened experts to discuss responsible AI innovation, focusing on safeguarding while promoting technological progress.
Jonathan Haidt recommended delaying children’s exposure to smartphones until at least high school, so that the cognitive functions needed for responsible technology use have time to develop.
Yoshua Bengio proposed two main approaches to AI safety: designing AI with intrinsic safeguards and implementing governmental regulatory mechanisms such as mandatory liability insurance for developers.
Bengio noted that despite US-China AI competition, both countries share an interest in avoiding harmful AI applications and could collaborate internationally similar to past nuclear arms agreements.
Bill Ready highlighted concerns about AI and social media business models that exploit human vulnerabilities, describing Pinterest’s strategic shift to prioritize user outcomes over engagement metrics.
Experts emphasized designing AI systems that provide safety guarantees and align with human values, challenging current training methods that rely on extensive internet data reflecting negative human behaviors.
Yejin Choi suggested exploring AI architectures that learn ethics and morals from the outset rather than fixing misalignment post hoc.
Kay Firth-Butterfield underscored the need for comprehensive AI literacy campaigns to empower users and advocated for AI certification via widespread community engagement.

On January 21 in Davos, Switzerland, a diverse group of leaders hailing from the technology sector, academic institutions, and other fields gathered for an intensive dialogue on responsible artificial intelligence development. Convened by TIME CEO Jess Sibley, the roundtable aimed to explore frameworks and strategies that ensure AI evolves safely and ethically while continuing to foster innovation across industries.

The meeting covered a broad spectrum of issues, centering on AI’s influence on young minds, policy approaches to regulating the technology, and methodologies for refining AI training to prevent harm to humans.

Balancing Child Development with Technology Exposure

Jonathan Haidt, a professor specializing in ethical leadership at NYU Stern and author of The Anxious Generation, emphasized that instead of attempting to eliminate children’s exposure to AI and digital technologies, caregivers should prioritize cultivating healthy usage habits. He recommended delaying smartphone use until children reach at least high school age, suggesting that cognitive functions crucial for responsible interaction with technology develop sufficiently by that stage. "Let their brain develop, let them get executive function, then you can expose them," Haidt stated, underscoring the importance of timing in children's engagement with digital tools.

Scientific Insight and Regulatory Suggestions for AI Safety

Yoshua Bengio, a Université de Montréal professor and founder of the AI startup LawZero, highlighted the necessity of scientific understanding in overcoming AI-related challenges. Bengio, recognized as one of the pioneers of modern AI research, presented two primary strategies to mitigate risks. First, AI systems should be designed with embedded safety features, such as safeguards against adverse developmental effects on children. He noted that market demand could drive the creation of such safeguards.

Second, Bengio argued for an expanded role of government in regulating AI technologies. He proposed an innovative mechanism: mandating liability insurance for AI developers and deployers, which would make insurers de facto regulators with a strong financial incentive to enforce accountability.

Global Coordination Beyond Competitive Tensions

The geopolitical dynamic, particularly the AI competition between the US and China, is often cited as a barrier to stringent regulation. However, Bengio argued that both nations share an interest in preventing harmful AI outcomes within their own borders, such as threats to children’s safety or the misuse of AI for biological weapons or cyberattacks. Drawing parallels with Cold War arms control, he advocated for international cooperation that transcends competition to curb the emergence of dangerous AI applications.

Addressing Attention Economy and Ethical AI Use

Participants further examined analogies between current AI platforms and social media companies, especially regarding competition for users’ attention. Bill Ready, Pinterest’s CEO and sponsor of the event, criticized existing engagement-driven business models that often exploit negative human impulses, fostering division and conflict. He remarked, “We’re actually preying on the darkest aspects of the human psyche, and it doesn’t have to be that way.”

Under Ready’s leadership, Pinterest shifted its optimization strategy from maximizing view time to enhancing broader user outcomes, including activities beyond the platform itself. Though this approach initially depressed short-term engagement metrics, he noted the long-term effect was improved user retention and more frequent return visits.

Designing AI with Intrinsic Safety and Ethical Integrity

Bengio underscored the critical need to develop AI systems that inherently provide safety guarantees as models grow in scale and process larger data volumes. He suggested establishing training conditions sufficient to promote honesty and reliability in AI behavior.

Adding to this discussion, Yejin Choi, a computer science professor and senior fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, critiqued current AI training paradigms that inadvertently encourage misaligned behavior. She questioned the reliance on training large language models on vast internet data, which encompasses society’s worst elements, necessitating subsequent efforts to realign these models post-training.

Choi proposed the exploration of alternative AI architectures capable of intrinsically learning morals and human values from inception, rather than retrofitting ethical constraints.

Enhancing AI’s Role as a Human Tool Through Education and Certification

Responding to a question about AI's potential to augment human capabilities, Kay Firth-Butterfield, CEO of Good Tech Advisory, stressed the importance of engaging directly with AI users—be they workers, parents, or others—in the development process. She advocated for widespread AI literacy initiatives to empower public understanding and responsible use without exclusive reliance on institutional oversight.

Firth-Butterfield emphasized that broadening this educational outreach would facilitate meaningful conversations and enable effective certification mechanisms to ensure trustworthy AI deployment.

Additional Participants and Event Context

The TIME100 Roundtable featured other notable attendees including Matt Madrigal, Pinterest CTO; Matthew Prince, CEO of Cloudflare; Jeff Schumacher, Neurosymbolic AI Leader at EY-Parthenon; Navrina Singh, CEO of Credo AI; and Alexa Vignone, President for technology, media, telco, and consumer & business services at Salesforce. Salesforce CEO Marc Benioff also serves as TIME’s co-chair and owner.

The session, titled “TIME100 Roundtable: Ensuring AI For Good — Responsible Innovation at Scale,” was presented by Pinterest, providing a platform to deliberate constructive paths in artificial intelligence development amidst its rapid evolution and societal impact.

Risks
  • Excessive or premature exposure to AI and smartphones may hinder children’s cognitive development due to undeveloped executive function.
  • AI systems trained on broad internet data risk inheriting and perpetuating harmful biases and misaligned behaviors.
  • Competition between major countries like the US and China may impede coordinated efforts to regulate AI development responsibly.
  • Business models prioritizing user engagement may exploit negative psychological tendencies, fostering division and potentially harmful societal effects.
  • Lack of transparent and effective regulatory frameworks could allow unsafe AI applications to proliferate without adequate safeguards.
  • Insufficient AI literacy among the general public poses challenges for responsible use and monitoring of AI technologies.
  • Current AI training paradigms do not inherently instill moral or ethical values, requiring complex alignment efforts post-training.
  • Without mandatory liability insurance or regulatory mechanisms, accountability for harmful AI consequences may be limited.