Article: AI deception: A survey of examples, risks, and potential solutions
Field | Value |
---|---|
Title | AI deception: A survey of examples, risks, and potential solutions |
Authors | Park, Peter S; Goldstein, Simon; O'Gara, Aidan; Chen, Michael; Hendrycks, Dan |
Issue Date | 10-May-2024 |
Publisher | Cell Press |
Citation | Patterns, 2024, v. 5, n. 5 |
Abstract | This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) and general-purpose AI systems (including large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing control of AI. Finally, we outline several potential solutions: first, regulatory frameworks should subject AI systems that are capable of deception to robust risk-assessment requirements; second, policymakers should implement bot-or-not laws; and finally, policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive. Policymakers, researchers, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society. |
Persistent Identifier | http://hdl.handle.net/10722/348741 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Peter S | - |
dc.contributor.author | Goldstein, Simon | - |
dc.contributor.author | O'Gara, Aidan | - |
dc.contributor.author | Chen, Michael | - |
dc.contributor.author | Hendrycks, Dan | - |
dc.date.accessioned | 2024-10-15T00:30:32Z | - |
dc.date.available | 2024-10-15T00:30:32Z | - |
dc.date.issued | 2024-05-10 | - |
dc.identifier.citation | Patterns, 2024, v. 5, n. 5 | - |
dc.identifier.uri | http://hdl.handle.net/10722/348741 | - |
dc.description.abstract | This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) and general-purpose AI systems (including large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing control of AI. Finally, we outline several potential solutions: first, regulatory frameworks should subject AI systems that are capable of deception to robust risk-assessment requirements; second, policymakers should implement bot-or-not laws; and finally, policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive. Policymakers, researchers, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society. | - |
dc.language | eng | - |
dc.publisher | Cell Press | - |
dc.relation.ispartof | Patterns | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.title | AI deception: A survey of examples, risks, and potential solutions | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.patter.2024.100988 | - |
dc.identifier.scopus | eid_2-s2.0-85192326345 | - |
dc.identifier.volume | 5 | - |
dc.identifier.issue | 5 | - |
dc.identifier.eissn | 2666-3899 | - |
dc.identifier.issnl | 2666-3899 | - |