Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1002/eat.24604
- Scopus: eid_2-s2.0-105022621370
Citations:
- Scopus: 0
Article: Generative AI for Eating Disorders: Linguistic Comparison With Online Support and Qualitative Analysis of Harms
| Title | Generative AI for Eating Disorders: Linguistic Comparison With Online Support and Qualitative Analysis of Harms |
|---|---|
| Authors | Yim, See Heng; Yoo, Dong Whi; Polymerou, Apostolos; Liu, Yuqi; Saha, Koustuv |
| Keywords | artificial intelligence; eating disorders; large language models; natural language analysis |
| Issue Date | 21-Nov-2025 |
| Publisher | Wiley |
| Citation | International Journal of Eating Disorders, 2025 |
| Abstract | Objective: Generative artificial intelligence (AI) has the potential to be used in supporting people with eating disorders (EDs), but this also presents certain risks. This study aimed to compare the psycholinguistic attributes (language markers of cognitive, emotional, and social processes) and lexico-semantic characteristics (patterns of word choice and meaning in text) of AI responses versus human responses in online communities (OCs), and to assess their potential harms. Method: We collected pre-COVID data from Reddit communities on EDs, consisting of 3634 posts and 22,359 responses. For each post, responses were generated using four widely used state-of-the-art AI models (GPT, Gemini, Llama, and Mistral) with prompts tailored to peer support. The Linguistic Inquiry and Word Count (LIWC) lexicon was used to examine psycholinguistic features across eight dimensions, and a suite of lexico-semantic comparisons was conducted across the dimensions of linguistic structure, style, and semantics. Additionally, 100 AI-generated responses were qualitatively analyzed by clinicians to identify potential harm. Results: Using OC responses as a comparison, AI responses were generally longer, more polite, yet more repetitive and less creative than human responses. Empathy scores varied among models. Qualitative analysis revealed themes of possible reinforcement of ED behaviors, implicit biases (e.g., favoring weight loss), and an inability to acknowledge contextual nuances, such as insensitivity to emotional cues and overgeneralized health advice. All AI chatbots produced responses containing harmful content, such as promoting ED behaviors or biases, to varying degrees. Discussion: Findings highlight differences between AI and OC responses, with potential risks of harm when using AI in ED peer support. Ethical considerations include the need for safeguards to prevent reinforcement of harmful behaviors and biases. This research underscores the importance of cautious AI integration; further validation and the development of guidelines are needed to ensure safe and effective support. |
| Persistent Identifier | http://hdl.handle.net/10722/367360 |
| ISSN | 0276-3478 (2023 Impact Factor: 4.7; 2023 SCImago Journal Rankings: 1.710) |
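The LIWC-style psycholinguistic scoring described in the abstract can be sketched as lexicon-based proportion counting: each response is tokenized and the fraction of tokens matching each category's word list is reported. The mini-lexicon and category names below are illustrative placeholders (the actual LIWC dictionary is proprietary and far larger); only the wildcard-matching convention mirrors LIWC's.

```python
import re

# Hypothetical mini-lexicon standing in for LIWC categories.
# A trailing "*" matches any suffix, mirroring LIWC's wildcard convention.
LEXICON = {
    "social": {"friend*", "talk*", "share*", "together"},
    "negemo": {"hurt*", "sad*", "worri*", "afraid"},
    "cogproc": {"think*", "because", "know*", "maybe"},
}

def liwc_style_scores(text: str) -> dict:
    """Return the fraction of tokens matching each lexicon category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {cat: 0.0 for cat in LEXICON}
    scores = {}
    for cat, entries in LEXICON.items():
        hits = 0
        for tok in tokens:
            for entry in entries:
                if entry.endswith("*"):
                    if tok.startswith(entry[:-1]):
                        hits += 1
                        break
                elif tok == entry:
                    hits += 1
                    break
        scores[cat] = hits / len(tokens)
    return scores
```

Because the scores are proportions of total tokens, they stay comparable between the typically longer AI responses and the shorter human OC responses.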
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yim, See Heng | - |
| dc.contributor.author | Yoo, Dong Whi | - |
| dc.contributor.author | Polymerou, Apostolos | - |
| dc.contributor.author | Liu, Yuqi | - |
| dc.contributor.author | Saha, Koustuv | - |
| dc.date.accessioned | 2025-12-10T08:06:45Z | - |
| dc.date.available | 2025-12-10T08:06:45Z | - |
| dc.date.issued | 2025-11-21 | - |
| dc.identifier.citation | International Journal of Eating Disorders, 2025 | - |
| dc.identifier.issn | 0276-3478 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/367360 | - |
| dc.description.abstract | <p>Objective: <br></p><p>Generative artificial intelligence (AI) has the potential to be used in supporting people with eating disorders (EDs), but this also presents certain risks. This study aimed to compare the psycholinguistic attributes (language markers of cognitive, emotional, and social processes) and lexico-semantic characteristics (patterns of word choice and meaning in text) of AI responses versus human responses in online communities (OCs), and to assess their potential harms. <br></p><p>Method: <br></p><p>We collected pre-COVID data from Reddit communities on EDs, consisting of 3634 posts and 22,359 responses. For each post, responses were generated using four widely used state-of-the-art AI models (GPT, Gemini, Llama, and Mistral) with prompts tailored to peer support. The Linguistic Inquiry and Word Count (LIWC) lexicon was used to examine psycholinguistic features across eight dimensions, and a suite of lexico-semantic comparisons was conducted across the dimensions of linguistic structure, style, and semantics. Additionally, 100 AI-generated responses were qualitatively analyzed by clinicians to identify potential harm. <br></p><p>Results: <br></p><p>Using OC responses as a comparison, AI responses were generally longer, more polite, yet more repetitive and less creative than human responses. Empathy scores varied among models. Qualitative analysis revealed themes of possible reinforcement of ED behaviors, implicit biases (e.g., favoring weight loss), and an inability to acknowledge contextual nuances, such as insensitivity to emotional cues and overgeneralized health advice. All AI chatbots produced responses containing harmful content, such as promoting ED behaviors or biases, to varying degrees. <br></p><p>Discussion: <br></p><p>Findings highlight differences between AI and OC responses, with potential risks of harm when using AI in ED peer support. Ethical considerations include the need for safeguards to prevent reinforcement of harmful behaviors and biases. This research underscores the importance of cautious AI integration; further validation and the development of guidelines are needed to ensure safe and effective support.</p> | - |
| dc.language | eng | - |
| dc.publisher | Wiley | - |
| dc.relation.ispartof | International Journal of Eating Disorders | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | artificial intelligence | - |
| dc.subject | eating disorders | - |
| dc.subject | large language models | - |
| dc.subject | natural language analysis | - |
| dc.title | Generative AI for Eating Disorders: Linguistic Comparison With Online Support and Qualitative Analysis of Harms | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1002/eat.24604 | - |
| dc.identifier.scopus | eid_2-s2.0-105022621370 | - |
| dc.identifier.eissn | 1098-108X | - |
| dc.identifier.issnl | 0276-3478 | - |
