Article: Precise Repositioning of Robotic Ultrasound: Improving Registration-Based Motion Compensation Using Ultrasound Confidence Optimization

Title: Precise Repositioning of Robotic Ultrasound: Improving Registration-Based Motion Compensation Using Ultrasound Confidence Optimization
Authors: Jiang, Zhongliang; Danis, Nehil; Bi, Yuan; Zhou, Mingchuan; Kroenke, Markus; Wendler, Thomas; Navab, Nassir
Keywords: Blood vessel visualization; medical robotics; robotic ultrasound (US); vision-based control
Issue Date: 2022
Citation: IEEE Transactions on Instrumentation and Measurement, 2022, v. 71, article no. 5020611
Abstract: Robotic ultrasound (US) imaging has been seen as a promising solution to overcome the limitations of free-hand US examinations, in particular interoperator variability. However, the fact that robotic US systems (RUSSs) cannot react to subject movements during scans limits their clinical acceptance. Human sonographers often react to patient movements by repositioning the probe or even restarting the acquisition, particularly when scanning long anatomical structures such as limb arteries. To reproduce this behavior, we propose a vision-based system that monitors the subject's movement and automatically updates the scan trajectory, thus seamlessly obtaining a complete 3-D image of the target anatomy. The motion monitoring module is built on object masks segmented from RGB images. Once the subject moves, the robot stops and recomputes a suitable trajectory by registering the surface point clouds of the object obtained before and after the movement using the iterative closest point (ICP) algorithm. Afterward, to ensure optimal contact conditions after repositioning the US probe, a confidence-based fine-tuning process is used to avoid potential gaps between the probe and the contact surface. Finally, the whole system is validated on a human-like arm phantom with an uneven surface, and the object segmentation network is additionally validated on volunteers. The results demonstrate that the presented system can react to object movements and reliably provide accurate 3-D images.
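
The record contains no code, so the following is a minimal, illustrative sketch of the registration step described in the abstract: surface point clouds captured before and after a subject movement are aligned with the iterative closest point (ICP) algorithm, and the resulting rigid transform is applied to the stored scan waypoints. It assumes Open3D and NumPy; all function names, variable names, and parameter values are hypothetical and not taken from the paper.

# Hypothetical sketch of ICP-based trajectory updating (not the authors' code).
import numpy as np
import open3d as o3d

def to_point_cloud(points_xyz: np.ndarray) -> o3d.geometry.PointCloud:
    """Wrap an (N, 3) array of surface points as an Open3D point cloud."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    return pcd

def estimate_motion(surface_before: np.ndarray,
                    surface_after: np.ndarray,
                    max_corr_dist: float = 0.01) -> np.ndarray:
    """Register the pre- and post-movement surface point clouds with ICP and
    return the 4x4 rigid transform mapping the old pose to the new one.
    max_corr_dist is an assumed correspondence threshold in metres."""
    source = to_point_cloud(surface_before)
    target = to_point_cloud(surface_after)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

def update_trajectory(waypoints_xyz: np.ndarray, transform: np.ndarray) -> np.ndarray:
    """Apply the estimated rigid transform to the (N, 3) scan waypoints."""
    homogeneous = np.hstack([waypoints_xyz, np.ones((len(waypoints_xyz), 1))])
    return (transform @ homogeneous.T).T[:, :3]

In the system described above, this registration is followed by a confidence-based fine-tuning of the probe pose to restore good acoustic contact, which the sketch does not cover.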
Persistent Identifier: http://hdl.handle.net/10722/365402
ISSN: 0018-9456
2023 Impact Factor: 5.6
2023 SCImago Journal Rankings: 1.536

 

DC Field: Value
dc.contributor.author: Jiang, Zhongliang
dc.contributor.author: Danis, Nehil
dc.contributor.author: Bi, Yuan
dc.contributor.author: Zhou, Mingchuan
dc.contributor.author: Kroenke, Markus
dc.contributor.author: Wendler, Thomas
dc.contributor.author: Navab, Nassir
dc.date.accessioned: 2025-11-05T06:55:54Z
dc.date.available: 2025-11-05T06:55:54Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Instrumentation and Measurement, 2022, v. 71, article no. 5020611
dc.identifier.issn: 0018-9456
dc.identifier.uri: http://hdl.handle.net/10722/365402
dc.description.abstract: Robotic ultrasound (US) imaging has been seen as a promising solution to overcome the limitations of free-hand US examinations, in particular interoperator variability. However, the fact that robotic US systems (RUSSs) cannot react to subject movements during scans limits their clinical acceptance. Human sonographers often react to patient movements by repositioning the probe or even restarting the acquisition, particularly when scanning long anatomical structures such as limb arteries. To reproduce this behavior, we propose a vision-based system that monitors the subject's movement and automatically updates the scan trajectory, thus seamlessly obtaining a complete 3-D image of the target anatomy. The motion monitoring module is built on object masks segmented from RGB images. Once the subject moves, the robot stops and recomputes a suitable trajectory by registering the surface point clouds of the object obtained before and after the movement using the iterative closest point (ICP) algorithm. Afterward, to ensure optimal contact conditions after repositioning the US probe, a confidence-based fine-tuning process is used to avoid potential gaps between the probe and the contact surface. Finally, the whole system is validated on a human-like arm phantom with an uneven surface, and the object segmentation network is additionally validated on volunteers. The results demonstrate that the presented system can react to object movements and reliably provide accurate 3-D images.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Instrumentation and Measurement
dc.subject: Blood vessel visualization
dc.subject: medical robotics
dc.subject: robotic ultrasound (US)
dc.subject: vision-based control
dc.title: Precise Repositioning of Robotic Ultrasound: Improving Registration-Based Motion Compensation Using Ultrasound Confidence Optimization
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TIM.2022.3200360
dc.identifier.scopus: eid_2-s2.0-85137564572
dc.identifier.volume: 71
dc.identifier.spage: article no. 5020611
dc.identifier.epage: article no. 5020611
dc.identifier.eissn: 1557-9662
