Conference Paper: Generalized Organ Segmentation by Imitating One-Shot Reasoning Using Anatomical Correlation

Title: Generalized Organ Segmentation by Imitating One-Shot Reasoning Using Anatomical Correlation
Authors: Zhou, H; Liu, H; Cao, S; Wei, D; Lu, C; Yu, Y; Ma, K; Zheng, Y
Keywords: One-shot learning; Image segmentation; Anatomical similarity
Issue Date: 2021
Publisher: Springer
Citation: Proceedings of the 27th International Conference on Information Processing in Medical Imaging (IPMI), Virtual Conference, 28-30 June 2021, p. 452-464
Abstract: Learning by imitation is one of the most significant abilities of human beings and plays a vital role in the human computational neural system. In medical image analysis, given several exemplars (anchors), an experienced radiologist can delineate unfamiliar organs by imitating the reasoning process learned from existing organ types. Inspired by this observation, we propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes. In this paper, we show that such a process can be integrated into the one-shot segmentation task, a challenging but meaningful problem. We propose pyramid reasoning modules (PRMs) to model the anatomical correlation between anchor and target volumes. In practice, the proposed module first computes a correlation matrix between the target and anchor computerized tomography (CT) volumes. This matrix is then used to transform the feature representations of both the anchor volume and its segmentation mask. Finally, OrganNet learns to fuse the representations from the various inputs and predicts segmentation results for the target volume. Extensive experiments show that OrganNet can effectively handle wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task. Moreover, even when compared with fully supervised segmentation models, OrganNet still produces satisfactory segmentation results.
Persistent Identifier: http://hdl.handle.net/10722/301298
ISBN: 9783030781903
ISI Accession Number ID: WOS:001116085700035
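
The abstract above describes a reasoning step in which a correlation matrix between the target and anchor CT feature volumes is used to transform the anchor features and the anchor's mask representation before fusion with the target features. The sketch below is a minimal, hypothetical single-scale version of such a module, assuming PyTorch-style 3D feature maps; the class name, parameter names, and the cosine-similarity/softmax choice are illustrative assumptions rather than the authors' implementation, and the paper's pyramid reasoning modules apply this idea across multiple feature scales.

```python
# Hypothetical sketch (not the authors' code) of a correlation-based reasoning step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrelationReasoning(nn.Module):
    """Single-scale sketch of a correlation-based reasoning module."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1x1 convolution that fuses target features with the
        # correlation-warped anchor and anchor-mask features.
        self.fuse = nn.Conv3d(channels * 3, channels, kernel_size=1)

    def forward(self, target_feat, anchor_feat, anchor_mask_feat):
        # All inputs are (B, C, D, H, W) feature volumes at the same scale.
        b, c, d, h, w = target_feat.shape
        t = target_feat.flatten(2)       # (B, C, N), with N = D*H*W
        a = anchor_feat.flatten(2)       # (B, C, N)
        m = anchor_mask_feat.flatten(2)  # (B, C, N)

        # Correlation matrix between target and anchor voxels: (B, N, N).
        corr = torch.bmm(F.normalize(t, dim=1).transpose(1, 2),
                         F.normalize(a, dim=1))
        attn = corr.softmax(dim=-1)  # each target voxel attends over anchor voxels

        # Warp anchor features and anchor-mask features into the target's layout.
        warped_anchor = torch.bmm(a, attn.transpose(1, 2)).view(b, c, d, h, w)
        warped_mask = torch.bmm(m, attn.transpose(1, 2)).view(b, c, d, h, w)

        # Fuse all representations into a target-aligned feature map.
        return self.fuse(torch.cat([target_feat, warped_anchor, warped_mask], dim=1))


# Usage sketch with random tensors (shapes are illustrative only).
if __name__ == "__main__":
    prm = CorrelationReasoning(channels=16)
    tgt = torch.randn(1, 16, 8, 8, 8)
    anc = torch.randn(1, 16, 8, 8, 8)
    msk = torch.randn(1, 16, 8, 8, 8)
    print(prm(tgt, anc, msk).shape)  # torch.Size([1, 16, 8, 8, 8])
```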

 

DC Field | Value | Language
dc.contributor.author | Zhou, H | -
dc.contributor.author | Liu, H | -
dc.contributor.author | Cao, S | -
dc.contributor.author | Wei, D | -
dc.contributor.author | Lu, C | -
dc.contributor.author | Yu, Y | -
dc.contributor.author | Ma, K | -
dc.contributor.author | Zheng, Y | -
dc.date.accessioned | 2021-07-27T08:09:03Z | -
dc.date.available | 2021-07-27T08:09:03Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Proceedings of the 27th International Conference on Information Processing in Medical Imaging (IPMI), Virtual Conference, 28-30 June 2021, p. 452-464 | -
dc.identifier.isbn | 9783030781903 | -
dc.identifier.uri | http://hdl.handle.net/10722/301298 | -
dc.description.abstract | Learning by imitation is one of the most significant abilities of human beings and plays a vital role in the human computational neural system. In medical image analysis, given several exemplars (anchors), an experienced radiologist can delineate unfamiliar organs by imitating the reasoning process learned from existing organ types. Inspired by this observation, we propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes. In this paper, we show that such a process can be integrated into the one-shot segmentation task, a challenging but meaningful problem. We propose pyramid reasoning modules (PRMs) to model the anatomical correlation between anchor and target volumes. In practice, the proposed module first computes a correlation matrix between the target and anchor computerized tomography (CT) volumes. This matrix is then used to transform the feature representations of both the anchor volume and its segmentation mask. Finally, OrganNet learns to fuse the representations from the various inputs and predicts segmentation results for the target volume. Extensive experiments show that OrganNet can effectively handle wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task. Moreover, even when compared with fully supervised segmentation models, OrganNet still produces satisfactory segmentation results. | -
dc.language | eng | -
dc.publisher | Springer. | -
dc.relation.ispartof | International Conference on Information Processing in Medical Imaging (IPMI) | -
dc.relation.ispartof | Lecture Notes in Computer Science (LNCS) ; v. 12729 | -
dc.subject | One-shot learning | -
dc.subject | Image segmentation | -
dc.subject | Anatomical similarity | -
dc.title | Generalized Organ Segmentation by Imitating One-Shot Reasoning Using Anatomical Correlation | -
dc.type | Conference_Paper | -
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | -
dc.identifier.authority | Yu, Y=rp01415 | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1007/978-3-030-78191-0_35 | -
dc.identifier.scopus | eid_2-s2.0-85111410125 | -
dc.identifier.hkuros | 323540 | -
dc.identifier.spage | 452 | -
dc.identifier.epage | 464 | -
dc.identifier.isi | WOS:001116085700035 | -
dc.publisher.place | Cham | -
