File Download
Links for full text (may require subscription):
- Publisher website: 10.1038/s41746-024-01411-2
- Scopus: eid_2-s2.0-85213729831
- Web of Science: WOS:001388447800001
Article: Aligning knowledge concepts to whole slide images for precise histopathology image analysis
Title | Aligning knowledge concepts to whole slide images for precise histopathology image analysis |
---|---|
Authors | Zhao, Weiqin; Guo, Ziyu; Fan, Yinshuang; Jiang, Yuming; Yeung, Maximus C.F.; Yu, Lequan |
Issue Date | 30-Dec-2024 |
Publisher | Nature Research |
Citation | npj Digital Medicine, 2024, v. 7, n. 1 |
Abstract | Due to their large size and lack of fine-grained annotation, Whole Slide Image (WSI) analysis is commonly approached as a Multiple Instance Learning (MIL) problem. However, previous studies learn only from training data, in stark contrast to how human clinicians teach each other and reason about histopathologic entities and factors. Here, we present a novel knowledge concept-based MIL framework, named ConcepPath, to fill this gap. Specifically, ConcepPath utilizes GPT-4 to induce reliable disease-specific human expert concepts from the medical literature and incorporates them with a group of purely learnable concepts to extract complementary knowledge from training data. In ConcepPath, WSIs are aligned to these linguistic knowledge concepts by utilizing a pathology vision-language model as the basic building component. In lung cancer subtyping, breast cancer HER2 scoring, and gastric cancer immunotherapy-sensitive subtyping tasks, ConcepPath significantly outperformed previous SOTA methods, which lacked the guidance of human expert knowledge. |
Persistent Identifier | http://hdl.handle.net/10722/353503 |
ISI Accession Number ID | WOS:001388447800001 |
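The abstract describes aligning WSI patch features to linguistic concept embeddings via a pathology vision-language model. The paper's actual architecture is not reproduced here; as a rough illustration only, a minimal numpy sketch of CLIP-style patch-concept similarity with attention pooling under MIL might look like the following (all function names, variable names, and the aggregation scheme are hypothetical assumptions, not ConcepPath's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def concept_guided_slide_score(patch_emb, concept_emb, class_of_concept):
    """Aggregate patch-concept similarities into slide-level class probabilities.

    patch_emb:        (n_patches, d)  L2-normalized patch image features
    concept_emb:      (n_concepts, d) L2-normalized concept text features
    class_of_concept: (n_concepts,)   class index each concept describes
    """
    # Cosine similarity between every patch and every concept.
    sim = patch_emb @ concept_emb.T              # (n_patches, n_concepts)
    # Attention over patches: each concept attends to the patches
    # that resemble it most (softmax over the patch axis).
    attn = softmax(sim, axis=0)                  # columns sum to 1
    # Attention-weighted similarity gives one score per concept.
    concept_scores = (attn * sim).sum(axis=0)    # (n_concepts,)
    # Average concept scores within each class, then normalize.
    n_classes = int(class_of_concept.max()) + 1
    class_scores = np.array([
        concept_scores[class_of_concept == c].mean()
        for c in range(n_classes)
    ])
    return softmax(class_scores)
```

In this toy aggregation, each concept acts as a learnable or text-derived "query" over the bag of patches, so the slide prediction is driven by the patches most similar to each class's concepts rather than by a single pooled feature.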
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhao, Weiqin | - |
dc.contributor.author | Guo, Ziyu | - |
dc.contributor.author | Fan, Yinshuang | - |
dc.contributor.author | Jiang, Yuming | - |
dc.contributor.author | Yeung, Maximus C.F. | - |
dc.contributor.author | Yu, Lequan | - |
dc.date.accessioned | 2025-01-18T00:35:29Z | - |
dc.date.available | 2025-01-18T00:35:29Z | - |
dc.date.issued | 2024-12-30 | - |
dc.identifier.citation | npj Digital Medicine, 2024, v. 7, n. 1 | - |
dc.identifier.uri | http://hdl.handle.net/10722/353503 | - |
dc.description.abstract | Due to their large size and lack of fine-grained annotation, Whole Slide Image (WSI) analysis is commonly approached as a Multiple Instance Learning (MIL) problem. However, previous studies learn only from training data, in stark contrast to how human clinicians teach each other and reason about histopathologic entities and factors. Here, we present a novel knowledge concept-based MIL framework, named ConcepPath, to fill this gap. Specifically, ConcepPath utilizes GPT-4 to induce reliable disease-specific human expert concepts from the medical literature and incorporates them with a group of purely learnable concepts to extract complementary knowledge from training data. In ConcepPath, WSIs are aligned to these linguistic knowledge concepts by utilizing a pathology vision-language model as the basic building component. In lung cancer subtyping, breast cancer HER2 scoring, and gastric cancer immunotherapy-sensitive subtyping tasks, ConcepPath significantly outperformed previous SOTA methods, which lacked the guidance of human expert knowledge. | - |
dc.language | eng | - |
dc.publisher | Nature Research | - |
dc.relation.ispartof | npj Digital Medicine | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.title | Aligning knowledge concepts to whole slide images for precise histopathology image analysis | - |
dc.type | Article | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.1038/s41746-024-01411-2 | - |
dc.identifier.scopus | eid_2-s2.0-85213729831 | - |
dc.identifier.volume | 7 | - |
dc.identifier.issue | 1 | - |
dc.identifier.eissn | 2398-6352 | - |
dc.identifier.isi | WOS:001388447800001 | - |
dc.identifier.issnl | 2398-6352 | - |