Article: Language agents reduce the risk of existential catastrophe

Title: Language agents reduce the risk of existential catastrophe
Authors: Goldstein, Simon; Kirk-Giannini, Cameron Domenico
Keywords: Existential risk; Goal misgeneralization; Interpretable AI; Language agents; Reward misspecification
Issue Date: 2023
Citation: AI and Society, 2023
Abstract: Recent advances in natural-language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make and update plans to pursue their desires given their beliefs. We argue that the rise of language agents significantly reduces the probability of an existential catastrophe due to loss of control over an AGI. This is because the probability of such an existential catastrophe is proportional to the difficulty of aligning AGI systems, and language agents significantly reduce that difficulty. In particular, language agents help to resolve three important issues related to aligning AIs: reward misspecification, goal misgeneralization, and uninterpretability.
Persistent Identifier: http://hdl.handle.net/10722/336392
DOI: 10.1007/s00146-023-01748-4
ISSN: 0951-5666
eISSN: 1435-5655
2023 Impact Factor: 2.9
2023 SCImago Journal Rankings: 0.976
ISI Accession Number: WOS:001050722400002
Scopus EID: 2-s2.0-85168370049
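
The architecture the abstract describes — an agent whose goal, beliefs, and plan are stored as plain natural-language text, and which repeatedly calls an LLM to re-plan and choose its next action — can be made concrete with a minimal sketch. This is an illustrative assumption, not the authors' implementation: the call_llm stub, the field names, and the loop structure are all hypothetical.

```python
# Minimal sketch of a language agent, under the assumptions stated above.
# `call_llm(prompt) -> str` is a hypothetical stand-in for any LLM API.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM-completion call; wire up a real client here."""
    raise NotImplementedError

class LanguageAgent:
    def __init__(self, goal: str):
        self.goal = goal               # desire, stored in natural language
        self.beliefs: list[str] = []   # observations, also natural language
        self.plan = ""                 # current plan, human-readable text

    def observe(self, observation: str) -> None:
        # Update beliefs with a new natural-language observation.
        self.beliefs.append(observation)

    def step(self) -> str:
        # One cycle: re-plan given goal and beliefs, then pick one action.
        self.plan = call_llm(
            f"Goal: {self.goal}\nBeliefs: {self.beliefs}\n"
            "Write a short step-by-step plan."
        )
        return call_llm(
            f"Plan: {self.plan}\nWhat single action should be taken next?"
        )
```

Because the goal, beliefs, and plan are ordinary strings, a human can inspect or edit the agent's state directly — the interpretability property the abstract appeals to.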

 

