Article: AI deception: A survey of examples, risks, and potential solutions

Title: AI deception: A survey of examples, risks, and potential solutions
Authors: Park, Peter S; Goldstein, Simon; O'Gara, Aidan; Chen, Michael; Hendrycks, Dan
Issue Date: 10-May-2024
Publisher: Cell Press
Citation: Patterns, 2024, v. 5, n. 5
Abstract: This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) and general-purpose AI systems (including large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing control of AI. Finally, we outline several potential solutions: first, regulatory frameworks should subject AI systems that are capable of deception to robust risk-assessment requirements; second, policymakers should implement bot-or-not laws; and finally, policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive. Policymakers, researchers, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society.
Persistent Identifier: http://hdl.handle.net/10722/348741


Dublin Core record (DC field: value)

dc.contributor.author: Park, Peter S
dc.contributor.author: Goldstein, Simon
dc.contributor.author: O'Gara, Aidan
dc.contributor.author: Chen, Michael
dc.contributor.author: Hendrycks, Dan
dc.date.accessioned: 2024-10-15T00:30:32Z
dc.date.available: 2024-10-15T00:30:32Z
dc.date.issued: 2024-05-10
dc.identifier.citation: Patterns, 2024, v. 5, n. 5
dc.identifier.uri: http://hdl.handle.net/10722/348741
dc.description.abstract: This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) and general-purpose AI systems (including large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing control of AI. Finally, we outline several potential solutions: first, regulatory frameworks should subject AI systems that are capable of deception to robust risk-assessment requirements; second, policymakers should implement bot-or-not laws; and finally, policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive. Policymakers, researchers, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society.
dc.language: eng
dc.publisher: Cell Press
dc.relation.ispartof: Patterns
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: AI deception: A survey of examples, risks, and potential solutions
dc.type: Article
dc.identifier.doi: 10.1016/j.patter.2024.100988
dc.identifier.scopus: eid_2-s2.0-85192326345
dc.identifier.volume: 5
dc.identifier.issue: 5
dc.identifier.eissn: 2666-3899
dc.identifier.issnl: 2666-3899

The record can be exported via the OAI-PMH interface in XML formats, or to other non-XML formats.
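As a minimal sketch of the OAI-PMH export route, the following Python snippet issues a standard GetRecord request for this item's Dublin Core metadata and prints the dc:* fields shown in the table above. The endpoint URL (https://hub.hku.hk/oai/request) and the item identifier (oai:hub.hku.hk:10722/348741) are assumptions based on common DSpace conventions; they are not confirmed by this page and may differ for this repository.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Assumed OAI-PMH endpoint and item identifier (typical DSpace layout:
# /oai/request and oai:<host>:<handle>); both are illustrative only.
OAI_ENDPOINT = "https://hub.hku.hk/oai/request"
ITEM_ID = "oai:hub.hku.hk:10722/348741"

# Standard OAI-PMH GetRecord request asking for Dublin Core (oai_dc) metadata.
url = (
    f"{OAI_ENDPOINT}?verb=GetRecord"
    f"&metadataPrefix=oai_dc&identifier={ITEM_ID}"
)

with urllib.request.urlopen(url) as resp:
    root = ET.fromstring(resp.read())

# Namespace used by the Dublin Core elements inside the OAI-PMH envelope.
DC_NS = "http://purl.org/dc/elements/1.1/"

# Walk the response and print every Dublin Core field (title, creator, ...).
for elem in root.iter():
    if elem.tag.startswith("{" + DC_NS + "}") and elem.text:
        field = elem.tag.split("}", 1)[1]
        print(f"{field}: {elem.text.strip()}")
```

The same request can also be issued from a browser by pasting the constructed URL; the response wraps the Dublin Core fields listed above in an OAI-PMH XML envelope.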