Conference Paper: Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance

Title: Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance
Authors: Li, Xuesong; Huang, Dianye; Zhang, Yameng; Navab, Nassir; Jiang, Zhongliang
Keywords: Point-of-Care Ultrasound; Scene Graph; Ultrasound Image Analysis
Issue Date: 2026
Citation: Lecture Notes in Computer Science, 2026, v. 15968 LNCS, p. 500-510
Abstract: Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advancements in large language models (LLMs) have been used to automatically generate terminology-rich summaries oriented to clinicians with sufficient physiological knowledge. Nevertheless, the increasing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been explored. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to non-expert users and provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for non-expert users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies within the current imaging view, assisting ordinary users in achieving more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid and thyroid, across five volunteers. The results demonstrate the potential of the method to maximally democratize ultrasound by enhancing its interpretability and usability for non-expert users. Project page: https://noseefood.github.io/us-scene-graph/.
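To make the abstract's pipeline concrete, the following is a minimal, purely illustrative sketch, not the paper's actual implementation. It assumes a scene graph represented as (subject, relation, object) triples and a hypothetical protocol anatomy set (`PROTOCOL_ANATOMIES`); it shows how missing anatomies could be derived from the predicted SG and how the SG plus a user query could be composed into an LLM prompt for a lay explanation. All names and the data format here are assumptions for illustration.

```python
# Hypothetical sketch of the SG-based explanation/guidance idea.
# The triple format, anatomy list, and prompt wording are illustrative
# assumptions, not the paper's actual models or data structures.

PROTOCOL_ANATOMIES = {"carotid artery", "thyroid", "jugular vein"}  # assumed target set

def missing_anatomies(scene_graph):
    """Return protocol anatomies absent from the predicted scene graph."""
    present = {s for s, _, _ in scene_graph} | {o for _, _, o in scene_graph}
    return PROTOCOL_ANATOMIES - present

def guidance_prompt(scene_graph, user_query):
    """Compose an LLM prompt refining the abstract SG into a lay explanation."""
    facts = "; ".join(f"{s} {r} {o}" for s, r, o in scene_graph)
    missing = ", ".join(sorted(missing_anatomies(scene_graph))) or "none"
    return (f"Scene graph: {facts}. Missing anatomies: {missing}. "
            f"Explain this ultrasound view to a non-expert and answer: {user_query}")

# Example: a neck view where the jugular vein is not yet in frame.
sg = [("carotid artery", "left of", "thyroid"),
      ("thyroid", "adjacent to", "trachea")]
print(guidance_prompt(sg, "What am I looking at?"))
```

In this sketch, the set difference against the protocol list is what would drive scanning guidance toward anatomies missing from the current view; the prompt string is what an LLM would refine into the user-facing explanation.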
Persistent Identifier: http://hdl.handle.net/10722/365367
ISSN: 0302-9743 (eISSN: 1611-3349)
2023 SCImago Journal Rankings: 0.606

 

DC Field: Value
dc.contributor.author: Li, Xuesong
dc.contributor.author: Huang, Dianye
dc.contributor.author: Zhang, Yameng
dc.contributor.author: Navab, Nassir
dc.contributor.author: Jiang, Zhongliang
dc.date.accessioned: 2025-11-05T06:55:40Z
dc.date.available: 2025-11-05T06:55:40Z
dc.date.issued: 2026
dc.identifier.citation: Lecture Notes in Computer Science, 2026, v. 15968 LNCS, p. 500-510
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/365367
dc.language: eng
dc.relation.ispartof: Lecture Notes in Computer Science
dc.subject: Point-of-Care Ultrasound
dc.subject: Scene Graph
dc.subject: Ultrasound Image Analysis
dc.title: Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-032-05114-1_48
dc.identifier.scopus: eid_2-s2.0-105017958855
dc.identifier.volume: 15968 LNCS
dc.identifier.spage: 500
dc.identifier.epage: 510
dc.identifier.eissn: 1611-3349
