Article: Exploring Intra- and Inter-Video Relation for Surgical Semantic Scene Segmentation

Title: Exploring Intra- and Inter-Video Relation for Surgical Semantic Scene Segmentation
Authors: Jin, Yueming; Yu, Yang; Chen, Cheng; Zhao, Zixu; Heng, Pheng Ann; Stoyanov, Danail
Keywords: pixel-level contrast; scene segmentation; surgical data science; temporal modelling; transformer
Issue Date: 2022
Citation: IEEE Transactions on Medical Imaging, 2022, v. 41, n. 11, p. 2991-3002
Abstract: Automatic surgical scene segmentation is fundamental to enabling cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which make use of only the local context. In this paper, we propose STswinCL, a novel framework that explores the complementary intra- and inter-video relations to boost segmentation performance by progressively capturing the global context. We first develop a hierarchical Transformer to capture the intra-video relation, drawing richer spatial and temporal cues from neighbouring pixels and previous frames. A joint space-time window shift scheme is proposed to efficiently aggregate these two cues into each pixel embedding. We then explore the inter-video relation via pixel-to-pixel contrastive learning, which structures the global embedding space well. A multi-source contrast training objective is developed to group pixel embeddings across videos under ground-truth guidance, which is crucial for learning the global properties of the whole dataset. We extensively validate our approach on two public surgical video benchmarks, the EndoVis18 Challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/YuemingJin/STswinCL.
Persistent Identifier: http://hdl.handle.net/10722/349727
ISSN: 0278-0062
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 3.703
ISI Accession Number ID: WOS:000876061700003
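
The abstract above describes two mechanisms that are concrete enough to illustrate briefly. First, the intra-video branch relies on a joint space-time window shift inside a hierarchical Transformer. The sketch below is a hypothetical PyTorch illustration, not the authors' released code: it only shows how a clip feature map could be partitioned into space-time windows with an optional cyclic shift, and the window size and shift rule are assumptions.

```python
import torch

def window_partition_3d(x, window_size=(2, 7, 7), shift=False):
    """Split a clip feature map into non-overlapping space-time windows.

    x: (B, T, H, W, C); T, H and W are assumed divisible by the window size.
    Returns (num_windows * B, Tw*Hw*Ww, C) token groups for window attention.
    """
    B, T, H, W, C = x.shape
    Tw, Hw, Ww = window_size
    if shift:
        # Cyclic shift along time and both spatial axes, so that alternating
        # blocks attend over different window groupings.
        x = torch.roll(x, shifts=(-(Tw // 2), -(Hw // 2), -(Ww // 2)), dims=(1, 2, 3))
    x = x.reshape(B, T // Tw, Tw, H // Hw, Hw, W // Ww, Ww, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).contiguous()
    return x.reshape(-1, Tw * Hw * Ww, C)
```

Second, the inter-video branch groups pixel embeddings across videos by their ground-truth class. The following minimal sketch of a pixel-to-pixel supervised contrastive (InfoNCE-style) objective is likewise an assumption-laden illustration; the function names, the per-class sampling strategy, and hyper-parameters such as num_per_class and temperature are hypothetical.

```python
import torch
import torch.nn.functional as F


def sample_pixel_embeddings(features, labels, num_per_class=64):
    """Randomly sample up to num_per_class pixel embeddings per class.

    features: (B, C, H, W) projected pixel embeddings from a batch that may
              mix frames of several videos.
    labels:   (B, H, W) integer ground-truth class map.
    Returns (embeddings of shape (N, C), classes of shape (N,)).
    """
    B, C, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, C)   # (B*H*W, C)
    flat_labels = labels.reshape(-1)                       # (B*H*W,)
    chosen_feats, chosen_classes = [], []
    for cls in flat_labels.unique():
        idx = (flat_labels == cls).nonzero(as_tuple=True)[0]
        pick = idx[torch.randperm(idx.numel(), device=idx.device)[:num_per_class]]
        chosen_feats.append(feats[pick])
        chosen_classes.append(flat_labels[pick])
    return torch.cat(chosen_feats), torch.cat(chosen_classes)


def pixel_contrastive_loss(embeddings, classes, temperature=0.1):
    """Supervised InfoNCE over sampled pixel embeddings.

    Pixels sharing a class label (possibly from different videos) are
    positives; every other sampled pixel acts as a negative.
    """
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.t() / temperature                       # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (classes.unsqueeze(0) == classes.unsqueeze(1)) & ~self_mask
    # Softmax denominator excludes the pixel itself.
    denom = torch.logsumexp(logits.masked_fill(self_mask, float('-inf')),
                            dim=1, keepdim=True)
    log_prob = logits - denom
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_pixel = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return per_pixel[pos_mask.any(dim=1)].mean()
```

In a training loop of this kind, the sampled embeddings would come from a projection head on the segmentation features of a mixed-video batch, and the contrastive term would be added to the usual cross-entropy segmentation loss; for the authors' actual implementation, refer to the repository linked in the abstract.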

 

DC Field | Value
dc.contributor.author | Jin, Yueming
dc.contributor.author | Yu, Yang
dc.contributor.author | Chen, Cheng
dc.contributor.author | Zhao, Zixu
dc.contributor.author | Heng, Pheng Ann
dc.contributor.author | Stoyanov, Danail
dc.date.accessioned | 2024-10-17T07:00:25Z
dc.date.available | 2024-10-17T07:00:25Z
dc.date.issued | 2022
dc.identifier.citation | IEEE Transactions on Medical Imaging, 2022, v. 41, n. 11, p. 2991-3002
dc.identifier.issn | 0278-0062
dc.identifier.uri | http://hdl.handle.net/10722/349727
dc.description.abstract | Automatic surgical scene segmentation is fundamental to enabling cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which make use of only the local context. In this paper, we propose STswinCL, a novel framework that explores the complementary intra- and inter-video relations to boost segmentation performance by progressively capturing the global context. We first develop a hierarchical Transformer to capture the intra-video relation, drawing richer spatial and temporal cues from neighbouring pixels and previous frames. A joint space-time window shift scheme is proposed to efficiently aggregate these two cues into each pixel embedding. We then explore the inter-video relation via pixel-to-pixel contrastive learning, which structures the global embedding space well. A multi-source contrast training objective is developed to group pixel embeddings across videos under ground-truth guidance, which is crucial for learning the global properties of the whole dataset. We extensively validate our approach on two public surgical video benchmarks, the EndoVis18 Challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/YuemingJin/STswinCL.
dc.language | eng
dc.relation.ispartof | IEEE Transactions on Medical Imaging
dc.subject | pixel-level contrast
dc.subject | scene segmentation
dc.subject | Surgical data science
dc.subject | temporal modelling
dc.subject | transformer
dc.title | Exploring Intra- and Inter-Video Relation for Surgical Semantic Scene Segmentation
dc.type | Article
dc.description.nature | link_to_subscribed_fulltext
dc.identifier.doi | 10.1109/TMI.2022.3177077
dc.identifier.pmid | 35604967
dc.identifier.scopus | eid_2-s2.0-85130826976
dc.identifier.volume | 41
dc.identifier.issue | 11
dc.identifier.spage | 2991
dc.identifier.epage | 3002
dc.identifier.eissn | 1558-254X
dc.identifier.isi | WOS:000876061700003
