Article: Generative AI for Eating Disorders: Linguistic Comparison With Online Support and Qualitative Analysis of Harms

Title: Generative AI for Eating Disorders: Linguistic Comparison With Online Support and Qualitative Analysis of Harms
Authors: Yim, See Heng; Yoo, Dong Whi; Polymerou, Apostolos; Liu, Yuqi; Saha, Koustuv
Keywords: artificial intelligence; eating disorders; large language models; natural language analysis
Issue Date: 21-Nov-2025
Publisher: Wiley
Citation: International Journal of Eating Disorders, 2025
Abstract

Objective:

Generative artificial intelligence (AI) has the potential to support people with eating disorders (EDs), but its use also presents risks. This study aimed to compare the psycholinguistic attributes (language markers of cognitive, emotional, and social processes) and lexico-semantic characteristics (patterns of word choice and meaning in text) of AI responses with those of human responses in online communities (OCs), and to assess the potential harms of AI responses.

Method:

We collected pre-COVID data from Reddit communities on EDs, consisting of 3634 posts and 22,359 responses. For each post, responses were generated using four widely used state-of-the-art AI models (GPT, Gemini, Llama, and Mistral) with prompts tailored to peer support. The Linguistic Inquiry and Word Count (LIWC) lexicon was used to examine psycholinguistic features across eight dimensions, and a suite of lexico-semantic comparisons was conducted across the dimensions of linguistic structure, style, and semantics. Additionally, 100 AI-generated responses were qualitatively analyzed by clinicians to identify potential harm.
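As an illustration of the lexicon-based analysis described above, the following is a minimal Python sketch of how LIWC-style category frequencies can be computed over a response; the mini-lexicon and its category words here are hypothetical stand-ins for the proprietary LIWC dictionary, which additionally matches word stems and covers many more categories than shown.

    # Minimal sketch of LIWC-style lexicon scoring (illustrative only).
    # LEXICON is a hypothetical stand-in, not the real LIWC dictionary.
    import re
    from collections import Counter

    LEXICON = {
        "affect": {"sad", "happy", "worried", "hope"},
        "social": {"friend", "family", "talk", "share"},
    }

    def category_frequencies(text):
        """Return each category's word frequency, normalized by token count."""
        tokens = re.findall(r"[a-z']+", text.lower())
        if not tokens:
            return {cat: 0.0 for cat in LEXICON}
        counts = Counter(tokens)
        return {
            cat: sum(counts[w] for w in words) / len(tokens)
            for cat, words in LEXICON.items()
        }

    # Scoring responses this way puts AI-generated and human OC replies
    # on the same psycholinguistic scale for comparison.
    print(category_frequencies("I'm worried, but talking to a friend gives me hope."))
    # -> {'affect': 0.2, 'social': 0.1}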

Results:

Compared with OC responses, AI responses were generally longer and more polite, yet more repetitive and less creative. Empathy scores varied among models. Qualitative analysis revealed themes of possible reinforcement of ED behaviors, implicit biases (e.g., favoring weight loss), and an inability to acknowledge contextual nuances, such as insensitivity to emotional cues and overgeneralized health advice. All AI chatbots produced responses containing harmful content, such as promoting ED behaviors or biases, to varying degrees.

Discussion:

Findings highlight differences between AI and OC responses, along with potential risks of harm when using AI in ED peer support. Ethical considerations include the need for safeguards to prevent reinforcement of harmful behaviors and biases. This research underscores the importance of cautious AI integration: further validation and the development of guidelines are needed to ensure safe and effective support.


Persistent Identifier: http://hdl.handle.net/10722/367360
ISSN: 0276-3478
2023 Impact Factor: 4.7
2023 SCImago Journal Rankings: 1.710

DC Field                  Value
dc.contributor.author     Yim, See Heng
dc.contributor.author     Yoo, Dong Whi
dc.contributor.author     Polymerou, Apostolos
dc.contributor.author     Liu, Yuqi
dc.contributor.author     Saha, Koustuv
dc.date.accessioned       2025-12-10T08:06:45Z
dc.date.available         2025-12-10T08:06:45Z
dc.date.issued            2025-11-21
dc.identifier.citation    International Journal of Eating Disorders, 2025
dc.identifier.issn        0276-3478
dc.identifier.uri         http://hdl.handle.net/10722/367360
dc.description.abstract   (full abstract; see above)
dc.language               eng
dc.publisher              Wiley
dc.relation.ispartof      International Journal of Eating Disorders
dc.rights                 This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject                artificial intelligence
dc.subject                eating disorders
dc.subject                large language models
dc.subject                natural language analysis
dc.title                  Generative AI for Eating Disorders: Linguistic Comparison With Online Support and Qualitative Analysis of Harms
dc.type                   Article
dc.identifier.doi         10.1002/eat.24604
dc.identifier.scopus      eid_2-s2.0-105022621370
dc.identifier.eissn       1098-108X
dc.identifier.issnl       0276-3478
