Article: AI wellbeing
| Title | AI wellbeing |
|---|---|
| Authors | Goldstein, Simon; Kirk-Giannini, Cameron Domenico |
| Keywords | AI; Belief; Desire; Desire satisfactionism; Experientialism; Propositional attitudes; Wellbeing |
| Issue Date | 1-Feb-2025 |
| Citation | Asian Journal of Philosophy, 2025, v. 4, n. 1 |
| Abstract | Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing artificial systems have wellbeing. Along the way, we argue that there are good reasons to believe that artificial systems can have wellbeing even if they are not phenomenally conscious. While we do not claim to demonstrate conclusively that AI systems have wellbeing, we argue that there is a significant probability that some AI systems have or will soon have wellbeing and that this should lead us to reassess our relationship with the intelligent systems we create. |
| Persistent Identifier | http://hdl.handle.net/10722/366833 |
| ISSN | 2731-4642 |
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Goldstein, Simon | - |
| dc.contributor.author | Kirk-Giannini, Cameron Domenico | - |
| dc.date.accessioned | 2025-11-26T02:50:25Z | - |
| dc.date.available | 2025-11-26T02:50:25Z | - |
| dc.date.issued | 2025-02-01 | - |
| dc.identifier.citation | Asian Journal of Philosophy, 2025, v. 4, n. 1 | - |
| dc.identifier.issn | 2731-4642 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/366833 | - |
| dc.description.abstract | Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing artificial systems have wellbeing. Along the way, we argue that there are good reasons to believe that artificial systems can have wellbeing even if they are not phenomenally conscious. While we do not claim to demonstrate conclusively that AI systems have wellbeing, we argue that there is a significant probability that some AI systems have or will soon have wellbeing and that this should lead us to reassess our relationship with the intelligent systems we create. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Asian Journal of Philosophy | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | AI | - |
| dc.subject | Belief | - |
| dc.subject | Desire | - |
| dc.subject | Desire satisfactionism | - |
| dc.subject | Experientialism | - |
| dc.subject | Propositional attitudes | - |
| dc.subject | Wellbeing | - |
| dc.title | AI wellbeing | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1007/s44204-025-00246-2 | - |
| dc.identifier.scopus | eid_2-s2.0-85219733714 | - |
| dc.identifier.volume | 4 | - |
| dc.identifier.issue | 1 | - |
