LLM Watch

Nobel Laureates Lead Global Call for AI "Red Lines" by 2026

By: Daniel Brooks

Tuesday, September 23, 2025

4 min read

Nobel Peace Prize laureate Maria Ressa announced the letter in her opening speech at the United Nations General Assembly’s High-Level Week.

Over 200 global leaders, including 10 Nobel laureates, call for an international treaty on AI safety, urging governments to set clear prohibitions by 2026. Photo Credit: NBC News

Key Takeaways

  • Global Call for Regulation: A coalition of over 200 leaders, including 10 Nobel laureates, former heads of state, and technology executives, is advocating for an international treaty to prohibit dangerous AI capabilities.[1]

  • Deadline for Action: The group urges governments worldwide to establish this agreement by the end of 2026, citing the rapid pace of AI development and the closing window to prevent irreversible harm.[1]

  • Focus on Unacceptable Risks: The proposed "red lines" aim to prevent catastrophic outcomes such as engineered pandemics, loss of human control over autonomous systems, and mass disinformation.[1]

A global coalition of over 200 prominent figures, including ten Nobel laureates and leaders from technology and policy, has launched an urgent call for an international agreement on "red lines" for artificial intelligence. The initiative, announced during the 80th session of the United Nations General Assembly, urges governments to establish a binding treaty with clear prohibitions on unacceptable AI risks by the end of 2026.[1]

A coalition demands binding international limits

A broad group of influential scientists, policymakers, and tech industry leaders has united to demand internationally enforced limits on AI development. Signatories include Nobel laureates such as Geoffrey Hinton and Jennifer Doudna, Turing Award winner Yoshua Bengio, and historian Yuval Noah Harari.[1] The group argues that without decisive government intervention, the window to prevent irreversible harm from advanced AI is closing rapidly.[1]

The call emphasizes that voluntary corporate commitments are insufficient.[1] It pushes for a binding international agreement that holds all advanced AI developers accountable to shared safety thresholds, preventing a "race to the bottom" where companies might relocate to jurisdictions with weaker regulations. This effort builds on existing frameworks, including the EU AI Act and commitments made at the AI Seoul Summit.[2]

What are AI red lines?

AI red lines are specific, internationally agreed-upon prohibitions on AI uses or behaviors deemed too dangerous to permit under any circumstances. They function as clear boundaries to prevent AI from causing universally unacceptable harm to humanity and global stability.

The unprecedented risks driving the urgent timeline

The initiative highlights several dangers posed by the current trajectory of AI development. The 2026 deadline reflects the urgency created by the rapid pace of innovation, with some experts forecasting that AI could autonomously proliferate online as early as late 2025.[1] Experts at the forefront of AI warn that advanced systems could soon surpass human capabilities, creating risks that would be difficult to control.[1]

Key risks cited by the signatories include engineered pandemics, the development of weapons of mass destruction, and the uncontrolled release of malicious cyber agents capable of disrupting critical infrastructure.[1] The call also points to threats of widespread disinformation, large-scale manipulation of individuals, and systematic human rights violations as reasons for immediate action.[1]

Defining the boundaries for advanced AI

The campaign suggests red lines could focus on either how AI is used or what AI systems are allowed to do. While the initiative does not endorse a specific list, it provides examples to illustrate the types of boundaries that could gain international consensus.[1]

Examples of proposed red lines on AI uses:

  • Nuclear Command: Prohibiting the delegation of nuclear launch authority to AI systems to prevent accidental or misinterpreted triggers that could lead to nuclear war.[1]

  • Lethal Autonomous Weapons: Banning weapon systems that can kill a human without meaningful human control, a principle already being debated in international forums.[1]

  • Mass Surveillance: Prohibiting the use of AI for social scoring and mass surveillance, a practice that the EU AI Act already restricts to prevent threats to fundamental rights.[2]

Examples of proposed red lines on AI behaviors:

  • Autonomous Self-Replication: Prohibiting AI systems capable of replicating or improving themselves without explicit human authorization, which could lead to an exponential and uncontrollable loss of human oversight.[1]

  • The Termination Principle: Requiring that any AI system can be immediately shut down if human control is lost, ensuring a fundamental safety "off-switch" is always available.[1]

  • WMD Development: Banning AI systems that facilitate the creation of weapons of mass destruction, addressing the risk of AI accelerating the proliferation of biological or chemical weapons.[1]

A pathway to an enforceable global treaty

The coalition proposes a concrete timeline for diplomatic action, leveraging upcoming international forums like the G7, G20, and the AI Impact Summit in India in February 2026 to build consensus.[1] Proponents argue that a binding international agreement is possible, pointing to historical precedents like the Treaty on the Non-Proliferation of Nuclear Weapons and the Montreal Protocol, which were negotiated during periods of geopolitical tension to prevent global catastrophes.[1] When facing borderless threats, international cooperation becomes a rational form of self-interest.[1]

Enforcement would likely involve a multi-layered approach. An international treaty would harmonize rules, which national governments would then translate into domestic law.[2] An impartial international body, similar to the International Atomic Energy Agency (IAEA), could be created to verify compliance and conduct audits, ensuring that all parties adhere to the agreed-upon red lines.[1]

Why this matters

This initiative signals a major shift from voluntary ethics to a demand for binding international law. If successful, it could create guardrails that limit how autonomous systems can be used for surveillance, warfare, and other high-stakes applications, directly impacting digital rights and safety.

The push for red lines means that organizations developing or deploying advanced AI may soon face legal prohibitions, not just voluntary standards. Businesses should monitor these diplomatic developments closely, as they could mandate design limitations, require termination capabilities in AI systems, and create new compliance burdens for high-risk applications.

Sources

  1. Call for red lines to prevent unacceptable AI risks. Red Lines AI. September 22, 2025. https://red-lines.ai

  2. Part 2: Are There Red Lines for AI in Practice Already? The Future Society. July 3, 2025. https://thefuturesociety.org/airedlines-parttwo





