Article: Will AI and humanity go to war?

Title: Will AI and humanity go to war?
Authors: Goldstein, Simon
Keywords: AI safety; Focal points; Information failures; Power shifts; The bargaining model
Issue Date: 17-Jul-2025
Publisher: Springer
Citation: AI and Society, 2025
Abstract: This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, by the uninterpretability of AI systems, and by differences in how AIs and humans analyze information. Commitment problems would make it difficult for AIs and humans to strike credible bargains. Commitment problems could arise from power shifts, that is, rapid and discontinuous increases in AI capabilities. Commitment problems could also arise from missing focal points, where AIs and humans fail to effectively coordinate on policies to limit war. In the face of this heightened chance of war, the paper proposes several interventions. War can be made less likely by improving the measurement of AI capabilities, by capping improvements in AI capabilities, by designing AI systems to be similar to humans, and by allowing AI systems to participate in democratic political institutions.
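The record does not spell the model out, but the logic the abstract invokes is the standard Fearon-style bargaining model of war. The following is a minimal sketch under textbook assumptions; the notation (win probability p, war costs c_H and c_AI, the contested good normalized to 1) is assumed here for illustration and is not drawn from the paper.

% A standard bargaining-range derivation (assumed notation, not the paper's own).
% Humanity wins a war with probability p over a good normalized to 1;
% war costs the two sides c_H and c_AI respectively.
\[
  \text{humanity's war payoff} = p - c_{H},
  \qquad
  \text{AI's war payoff} = (1 - p) - c_{AI}.
\]
% A peaceful split giving humanity a share x beats war for both sides iff
\[
  p - c_{H} \;\le\; x \;\le\; p + c_{AI},
\]
% an interval that is nonempty whenever c_H + c_AI > 0, so rational, fully
% informed agents can always find a deal both sides prefer to war.
% Information failures: the sides disagree about p (capabilities are hard to
% measure, systems uninterpretable), so their perceived ranges need not overlap.
% Commitment problems: a rapid power shift pushes tomorrow's p far above today's,
% so a deal struck at today's p stops being credible once capabilities jump.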
Persistent Identifier: http://hdl.handle.net/10722/366473
ISSN: 0951-5666
2023 Impact Factor: 2.9
2023 SCImago Journal Rankings: 0.976


DC Field: Value

dc.contributor.author: Goldstein, Simon
dc.date.accessioned: 2025-11-25T04:19:36Z
dc.date.available: 2025-11-25T04:19:36Z
dc.date.issued: 2025-07-17
dc.identifier.citation: AI and Society, 2025
dc.identifier.issn: 0951-5666
dc.identifier.uri: http://hdl.handle.net/10722/366473
dc.description.abstract: This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, by the uninterpretability of AI systems, and by differences in how AIs and humans analyze information. Commitment problems would make it difficult for AIs and humans to strike credible bargains. Commitment problems could arise from power shifts, that is, rapid and discontinuous increases in AI capabilities. Commitment problems could also arise from missing focal points, where AIs and humans fail to effectively coordinate on policies to limit war. In the face of this heightened chance of war, the paper proposes several interventions. War can be made less likely by improving the measurement of AI capabilities, by capping improvements in AI capabilities, by designing AI systems to be similar to humans, and by allowing AI systems to participate in democratic political institutions.
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: AI and Society
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: AI safety
dc.subject: Focal points
dc.subject: Information failures
dc.subject: Power shifts
dc.subject: The bargaining model
dc.title: Will AI and humanity go to war?
dc.type: Article
dc.identifier.doi: 10.1007/s00146-025-02460-1
dc.identifier.scopus: eid_2-s2.0-105011065207
dc.identifier.eissn: 1435-5655
dc.identifier.issnl: 0951-5666

Export: records are available via the OAI-PMH interface in XML formats, or in other non-XML formats.
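For reference, a record like this one can typically be fetched over OAI-PMH with a GetRecord request. The sketch below uses only standard protocol parameters; the endpoint URL and the oai:<host>:<handle> identifier pattern are assumptions about this repository, not values taken from the page.

import urllib.parse
import urllib.request

# Assumed OAI-PMH endpoint for this repository (hypothetical; check the site's documentation).
BASE_URL = "https://hub.hku.hk/oai/request"

params = {
    "verb": "GetRecord",          # standard OAI-PMH verb for retrieving a single record
    "metadataPrefix": "oai_dc",   # unqualified Dublin Core, matching the DC fields above
    # Assumed oai:<host>:<handle> identifier pattern (hypothetical):
    "identifier": "oai:hub.hku.hk:10722/366473",
}

# Issue the request and print the raw oai_dc XML for the record.
with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as resp:
    print(resp.read().decode("utf-8"))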