Article: On the risks of depersonalizing consent and the safe implementation of LLMs in medical decision-making
| Title | On the risks of depersonalizing consent and the safe implementation of LLMs in medical decision-making |
|---|---|
| Authors | Hildebrand, Carl |
| Keywords | Decision Making; Ethics; Ethics, Medical; Informed Consent; Public Policy |
| Issue Date | 1-Jan-2025 |
| Publisher | BMJ Publishing Group |
| Citation | Journal of Medical Ethics, 2025 |
| Abstract | Zohny et al provide a proof of concept for large language model (LLM)-patient communication in medical decision-making, discussing some of the risks and potential downsides of implementing this technology. However, removing human healthcare professionals (HCPs) from medical decision-making carries further risks they do not discuss. These include risks that a conscientious HCP with appropriate training can address, including (1) diminished situational autonomy due to pressure from family members or loved ones, (2) barriers to autonomy due to the situational inflexibility of LLMs and (3) diminished comfort and trust due to lack of human empathy. While the central moral focus in the implementation of this technology should be patient consent and the process that supports it, the dehumanisation of medical decision-making risks broader negative effects on both patients and HCPs, some of which are also discussed in this article. These concerns should be addressed to minimise the harm and maximise the good that LLMs can do to enhance patient decision-making. |
| Persistent Identifier | http://hdl.handle.net/10722/358478 |
| ISSN | 0306-6800 (2023 Impact Factor: 3.3; 2023 SCImago Journal Rankings: 0.952) |
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Hildebrand, Carl | - |
| dc.date.accessioned | 2025-08-07T00:32:34Z | - |
| dc.date.available | 2025-08-07T00:32:34Z | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.citation | Journal of Medical Ethics, 2025 | - |
| dc.identifier.issn | 0306-6800 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/358478 | - |
| dc.description.abstract | Zohny *et al* provide a proof of concept for large language model (LLM)-patient communication in medical decision-making, discussing some of the risks and potential downsides of implementing this technology. However, removing human healthcare professionals (HCPs) from medical decision-making carries further risks they do not discuss. These include risks that a conscientious HCP with appropriate training can address, including (1) diminished situational autonomy due to pressure from family members or loved ones, (2) barriers to autonomy due to the situational inflexibility of LLMs and (3) diminished comfort and trust due to lack of human empathy. While the central moral focus in the implementation of this technology should be patient consent and the process that supports it, the dehumanisation of medical decision-making risks broader negative effects on both patients and HCPs, some of which are also discussed in this article. These concerns should be addressed to minimise the harm and maximise the good that LLMs can do to enhance patient decision-making. | - |
| dc.language | eng | - |
| dc.publisher | BMJ Publishing Group | - |
| dc.relation.ispartof | Journal of Medical Ethics | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | Decision Making | - |
| dc.subject | Ethics | - |
| dc.subject | Ethics, Medical | - |
| dc.subject | Informed Consent | - |
| dc.subject | Public Policy | - |
| dc.title | On the risks of depersonalizing consent and the safe implementation of LLMs in medical decision-making | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1136/jme-2025-111017 | - |
| dc.identifier.scopus | eid_2-s2.0-105010865587 | - |
| dc.identifier.eissn | 1473-4257 | - |
| dc.identifier.issnl | 0306-6800 | - |
