Article: On the risks of depersonalizing consent and the safe implementation of LLMs in medical decision-making

Title: On the risks of depersonalizing consent and the safe implementation of LLMs in medical decision-making
Authors: Hildebrand, Carl
Keywords: Decision Making; Ethics; Ethics, Medical; Informed Consent; Public Policy
Issue Date: 1-Jan-2025
Publisher: BMJ Publishing Group
Citation: Journal of Medical Ethics, 2025
Abstract

Zohny et al provide a proof of concept for large language model (LLM)-patient communication in medical decision-making, discussing some of the risks and potential downsides of implementing this technology. However, removing human healthcare professionals (HCPs) from medical decision-making carries further risks they do not discuss. These include risks that a conscientious HCP with appropriate training can address, including (1) diminished situational autonomy due to pressure from family members or loved ones, (2) barriers to autonomy due to the situational inflexibility of LLMs and (3) diminished comfort and trust due to lack of human empathy. While the central moral focus in the implementation of this technology should be patient consent and the process that supports it, the dehumanisation of medical decision-making risks broader negative effects on both patients and HCPs, some of which are also discussed in this article. These concerns should be addressed to minimise the harm and maximise the good that LLMs can do to enhance patient decision-making.


Persistent Identifier: http://hdl.handle.net/10722/358478
ISSN: 0306-6800
2023 Impact Factor: 3.3
2023 SCImago Journal Rankings: 0.952

DC Field: Value

dc.contributor.author: Hildebrand, Carl
dc.date.accessioned: 2025-08-07T00:32:34Z
dc.date.available: 2025-08-07T00:32:34Z
dc.date.issued: 2025-01-01
dc.identifier.citation: Journal of Medical Ethics, 2025
dc.identifier.issn: 0306-6800
dc.identifier.uri: http://hdl.handle.net/10722/358478
dc.description.abstract: Zohny et al provide a proof of concept for large language model (LLM)-patient communication in medical decision-making, discussing some of the risks and potential downsides of implementing this technology. However, removing human healthcare professionals (HCPs) from medical decision-making carries further risks they do not discuss. These include risks that a conscientious HCP with appropriate training can address, including (1) diminished situational autonomy due to pressure from family members or loved ones, (2) barriers to autonomy due to the situational inflexibility of LLMs and (3) diminished comfort and trust due to lack of human empathy. While the central moral focus in the implementation of this technology should be patient consent and the process that supports it, the dehumanisation of medical decision-making risks broader negative effects on both patients and HCPs, some of which are also discussed in this article. These concerns should be addressed to minimise the harm and maximise the good that LLMs can do to enhance patient decision-making.
dc.language: eng
dc.publisher: BMJ Publishing Group
dc.relation.ispartof: Journal of Medical Ethics
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Decision Making
dc.subject: Ethics
dc.subject: Ethics, Medical
dc.subject: Informed Consent
dc.subject: Public Policy
dc.title: On the risks of depersonalizing consent and the safe implementation of LLMs in medical decision-making
dc.type: Article
dc.identifier.doi: 10.1136/jme-2025-111017
dc.identifier.scopus: eid_2-s2.0-105010865587
dc.identifier.eissn: 1473-4257
dc.identifier.issnl: 0306-6800

This record can be exported via the repository's OAI-PMH interface in XML formats, or in other non-XML formats.
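
As a minimal illustration of the OAI-PMH export path, the sketch below requests this record in the standard oai_dc (Dublin Core) XML serialization and prints its fields. The endpoint URL and OAI identifier are assumptions inferred from the handle above, not confirmed values for this repository; check the repository's OAI-PMH documentation before use.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    # Assumed OAI-PMH endpoint and item identifier (hypothetical; derived
    # from the handle 10722/358478 -- verify against the repository docs).
    BASE_URL = "https://hub.hku.hk/oai/request"
    IDENTIFIER = "oai:hub.hku.hk:10722/358478"

    params = {
        "verb": "GetRecord",
        "identifier": IDENTIFIER,
        "metadataPrefix": "oai_dc",  # standard Dublin Core XML format
    }
    url = BASE_URL + "?" + urllib.parse.urlencode(params)

    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())

    # Dublin Core elements sit in the dc namespace inside the oai_dc wrapper.
    DC = "{http://purl.org/dc/elements/1.1/}"
    for elem in root.iter():
        if elem.tag.startswith(DC) and elem.text:
            print(elem.tag[len(DC):], ":", elem.text.strip())

The same GetRecord request with a different metadataPrefix (one advertised by the repository's ListMetadataFormats response) selects any of the other available XML serializations.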