Conference Paper: AGNet: Attention-guided network for surgical tool presence detection

Title: AGNet: Attention-guided network for surgical tool presence detection
Authors: Hu, Xiaowei; Yu, Lequan; Chen, Hao; Qin, Jing; Heng, Pheng Ann
Keywords: Deep learning; Attention-guided network; Cholecystectomy; Surgical tool recognition; Laparoscopic videos
Issue Date: 2017
Publisher: Springer
Citation: Third International Workshop on Deep Learning in Medical Image Analysis (DLMIA 2017), and 7th International Workshop on Multimodal Learning for Clinical Decision Support (ML-CDS 2017), Held in Conjunction with MICCAI 2017, Québec City, Canada, 14 September 2017. In Cardoso, MJ, Arbel, T, Carneiro, G, et al. (Eds.), Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, Proceedings, p. 186-194. Cham, Switzerland: Springer, 2017.
Abstract: We propose a novel approach to automatically recognize the presence of surgical tools in surgical videos, which is quite challenging due to the large variation and partial appearance of surgical tools, the complexity of surgical scenes, and the co-occurrence of multiple tools in the same frame. Inspired by the human visual attention mechanism, which first orients to and selects important visual cues and then carefully analyzes these foci of attention, we first leverage a global prediction network to obtain a set of visual attention maps and a global prediction for each tool, and then harness a local prediction network to predict the presence of tools based on these attention maps. A gate function balances the global and local predictions to obtain the final results. The proposed attention-guided network (AGNet) achieves state-of-the-art performance on the m2cai16-tool dataset and surpasses the 2016 challenge winner by a significant margin. (A minimal code sketch of this global/local attention scheme follows the record fields below.)
Persistent Identifier: http://hdl.handle.net/10722/299561
ISBN: 9783319675572
ISSN: 0302-9743
2023 SCImago Journal Rankings: 0.606
ISI Accession Number ID: WOS:000463359200022
Series/Report no.: Lecture Notes in Computer Science; 10553
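
The abstract describes a two-stream scheme: a global prediction network that yields per-tool attention maps plus a global presence prediction, a local prediction network that predicts tool presence from those attention maps, and a gate function that balances the two. Below is a minimal, hypothetical PyTorch sketch of that idea; the backbone layers, the attention-map form, the gating function, and all sizes are illustrative assumptions and do not reproduce the authors' actual AGNet implementation.

    # Minimal sketch of the global/local attention-guided scheme described in the
    # abstract. All layer choices and the gating form are assumptions for illustration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_TOOLS = 7  # m2cai16-tool annotates 7 surgical tool classes

    class GlobalNet(nn.Module):
        """Produces per-tool attention maps and a global presence prediction."""
        def __init__(self, num_tools=NUM_TOOLS):
            super().__init__()
            self.features = nn.Sequential(                       # toy convolutional backbone
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.att_conv = nn.Conv2d(64, num_tools, 1)          # one response map per tool
            self.pool = nn.AdaptiveAvgPool2d(1)

        def forward(self, x):
            f = self.features(x)
            maps = self.att_conv(f)                              # (B, T, H', W')
            att = torch.sigmoid(maps)                            # visual attention maps
            global_logits = self.pool(maps).flatten(1)           # global prediction per tool
            return att, global_logits

    class LocalNet(nn.Module):
        """Predicts tool presence from attention-weighted input regions."""
        def __init__(self, num_tools=NUM_TOOLS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.cls = nn.Linear(32, 1)

        def forward(self, x, att):
            # Upsample each tool's attention map and re-weight the input frame with it.
            att_up = F.interpolate(att, size=x.shape[-2:], mode="bilinear", align_corners=False)
            logits = []
            for t in range(att_up.shape[1]):
                focused = x * att_up[:, t:t + 1]                 # focus on one tool's regions
                feat = self.features(focused).mean(dim=(2, 3))   # (B, 32)
                logits.append(self.cls(feat))                    # (B, 1)
            return torch.cat(logits, dim=1)                      # local prediction, (B, T)

    class AGNetSketch(nn.Module):
        """Fuses the global and local predictions with a per-tool gate."""
        def __init__(self, num_tools=NUM_TOOLS):
            super().__init__()
            self.global_net = GlobalNet(num_tools)
            self.local_net = LocalNet(num_tools)
            self.gate = nn.Parameter(torch.zeros(num_tools))     # learnable balance (assumed form)

        def forward(self, x):
            att, g = self.global_net(x)
            l = self.local_net(x, att)
            w = torch.sigmoid(self.gate)                         # gate weight in (0, 1)
            return w * g + (1 - w) * l                           # fused presence logits

Since several tools can co-occur in one frame, training such a sketch would treat the task as multi-label classification, e.g. with torch.nn.BCEWithLogitsLoss over the per-frame tool labels.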

 

DC Field: Value
dc.contributor.author: Hu, Xiaowei
dc.contributor.author: Yu, Lequan
dc.contributor.author: Chen, Hao
dc.contributor.author: Qin, Jing
dc.contributor.author: Heng, Pheng Ann
dc.date.accessioned: 2021-05-21T03:34:40Z
dc.date.available: 2021-05-21T03:34:40Z
dc.date.issued: 2017
dc.identifier.citation: Third International Workshop on Deep Learning in Medical Image Analysis (DLMIA 2017), and 7th International Workshop on Multimodal Learning for Clinical Decision Support (ML-CDS 2017), Held in Conjunction with MICCAI 2017, Québec City, Canada, 14 September 2017. In Cardoso, MJ, Arbel, T, Carneiro, G, et al. (Eds.), Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, Proceedings, p. 186-194. Cham, Switzerland: Springer, 2017
dc.identifier.isbn: 9783319675572
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/299561
dc.description.abstract: We propose a novel approach to automatically recognize the presence of surgical tools in surgical videos, which is quite challenging due to the large variation and partially appearance of surgical tools, the complicated surgical scenes, and the co-occurrence of some tools in the same frame. Inspired by human visual attention mechanism, which first orients and selects some important visual cues and then carefully analyzes these focuses of attention, we propose to first leverage a global prediction network to obtain a set of visual attention maps and a global prediction for each tool, and then harness a local prediction network to predict the presence of tools based on these attention maps. We apply a gate function to obtain the final prediction results by balancing the global and the local predictions. The proposed attention-guided network (AGNet) achieves state-of-the-art performance on m2cai16-tool dataset and surpasses the winner in 2016 by a significant margin.
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, Proceedings
dc.relation.ispartofseries: Lecture Notes in Computer Science ; 10553
dc.subject: Deep learning
dc.subject: Attention-guided network
dc.subject: Cholecystectomy
dc.subject: Surgical tool recognition
dc.subject: Laparoscopic videos
dc.title: AGNet: Attention-guided network for surgical tool presence detection
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-319-67558-9_22
dc.identifier.scopus: eid_2-s2.0-85029784067
dc.identifier.spage: 186
dc.identifier.epage: 194
dc.identifier.eissn: 1611-3349
dc.identifier.isi: WOS:000463359200022
dc.publisher.place: Cham, Switzerland
