File Download: There are no files associated with this item.

Article: Deep learning for in vivo near-infrared imaging

Title: Deep learning for in vivo near-infrared imaging
Authors: Ma, Zhuoran; Wang, Feifei; Wang, Weizhi; Zhong, Yeteng; Dai, Hongjie
Keywords: Deep learning; Near-infrared imaging; Second near-infrared window
Issue Date: 2021
Citation: Proceedings of the National Academy of Sciences of the United States of America, 2021, v. 118, n. 1, article no. e2021446118
Abstract: Detecting fluorescence in the second near-infrared window (NIR-II) up to ∼1,700 nm has emerged as a novel in vivo imaging modality with high spatial and temporal resolution through millimeter tissue depths. Imaging in the NIR-IIb window (1,500–1,700 nm) is the most effective one-photon approach to suppressing light scattering and maximizing imaging penetration depth, but relies on nanoparticle probes such as PbS/CdS containing toxic elements. On the other hand, imaging the NIR-I (700–1,000 nm) or NIR-IIa window (1,000–1,300 nm) can be done using biocompatible small-molecule fluorescent probes including US Food and Drug Administration-approved dyes such as indocyanine green (ICG), but has a caveat of suboptimal imaging quality due to light scattering. It is highly desired to achieve the performance of NIR-IIb imaging using molecular probes approved for human use. Here, we trained artificial neural networks to transform a fluorescence image in the shorter-wavelength NIR window of 900–1,300 nm (NIR-I/IIa) to an image resembling an NIR-IIb image. With deep-learning translation, in vivo lymph node imaging with ICG achieved an unprecedented signal-to-background ratio of >100. Using preclinical fluorophores such as IRDye-800, translation of ∼900-nm NIR molecular imaging of PD-L1 or EGFR greatly enhanced tumor-to-normal tissue ratio up to ∼20 from ∼5 and improved tumor margin localization. Further, deep learning greatly improved in vivo noninvasive NIR-II light-sheet microscopy (LSM) in resolution and signal/background. NIR imaging equipped with deep learning could facilitate basic biomedical research and empower clinical diagnostics and imaging-guided surgery in the clinic.
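
The core technique described in the abstract is supervised image-to-image translation: a neural network maps a shorter-wavelength NIR-I/IIa fluorescence frame to an image resembling an NIR-IIb frame. The paper's actual architecture, loss, and training data are not given in this record, so the sketch below is a minimal, hypothetical PyTorch encoder-decoder trained with a pixel-wise L1 loss on paired frames, for illustration only; it is not the authors' network.

```python
# Illustrative sketch only: a minimal encoder-decoder CNN trained with an L1
# loss to map NIR-I/IIa fluorescence frames to NIR-IIb-like targets.
# ASSUMPTIONS: architecture, layer sizes, loss, and paired training data are
# all hypothetical; they are not taken from the paper or this record.
import torch
import torch.nn as nn

class NIRTranslator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Downsample the single-channel input twice, then upsample back.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, nir_i_iia, nir_iib):
    """One supervised step on a batch of paired low/high-wavelength frames."""
    optimizer.zero_grad()
    pred = model(nir_i_iia)
    loss = nn.functional.l1_loss(pred, nir_iib)  # simple pixel-wise surrogate objective
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = NIRTranslator()
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)
    # Dummy paired batch: 4 single-channel 128x128 frames with values in [0, 1].
    x = torch.rand(4, 1, 128, 128)  # NIR-I/IIa input
    y = torch.rand(4, 1, 128, 128)  # NIR-IIb target
    print(train_step(model, opt, x, y))
```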
Persistent Identifier: http://hdl.handle.net/10722/325505
ISSN: 0027-8424
2023 Impact Factor: 9.4
2023 SCImago Journal Rankings: 3.737
ISI Accession Number ID: WOS:000607270100077

 

DC Fields
dc.contributor.author: Ma, Zhuoran
dc.contributor.author: Wang, Feifei
dc.contributor.author: Wang, Weizhi
dc.contributor.author: Zhong, Yeteng
dc.contributor.author: Dai, Hongjie
dc.date.accessioned: 2023-02-27T07:33:50Z
dc.date.available: 2023-02-27T07:33:50Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of the National Academy of Sciences of the United States of America, 2021, v. 118, n. 1, article no. e2021446118
dc.identifier.issn: 0027-8424
dc.identifier.uri: http://hdl.handle.net/10722/325505
dc.description.abstract: Detecting fluorescence in the second near-infrared window (NIR-II) up to ∼1,700 nm has emerged as a novel in vivo imaging modality with high spatial and temporal resolution through millimeter tissue depths. Imaging in the NIR-IIb window (1,500–1,700 nm) is the most effective one-photon approach to suppressing light scattering and maximizing imaging penetration depth, but relies on nanoparticle probes such as PbS/CdS containing toxic elements. On the other hand, imaging the NIR-I (700–1,000 nm) or NIR-IIa window (1,000–1,300 nm) can be done using biocompatible small-molecule fluorescent probes including US Food and Drug Administration-approved dyes such as indocyanine green (ICG), but has a caveat of suboptimal imaging quality due to light scattering. It is highly desired to achieve the performance of NIR-IIb imaging using molecular probes approved for human use. Here, we trained artificial neural networks to transform a fluorescence image in the shorter-wavelength NIR window of 900–1,300 nm (NIR-I/IIa) to an image resembling an NIR-IIb image. With deep-learning translation, in vivo lymph node imaging with ICG achieved an unprecedented signal-to-background ratio of >100. Using preclinical fluorophores such as IRDye-800, translation of ∼900-nm NIR molecular imaging of PD-L1 or EGFR greatly enhanced tumor-to-normal tissue ratio up to ∼20 from ∼5 and improved tumor margin localization. Further, deep learning greatly improved in vivo noninvasive NIR-II light-sheet microscopy (LSM) in resolution and signal/background. NIR imaging equipped with deep learning could facilitate basic biomedical research and empower clinical diagnostics and imaging-guided surgery in the clinic.
dc.language: eng
dc.relation.ispartof: Proceedings of the National Academy of Sciences of the United States of America
dc.subject: Deep learning
dc.subject: Near-infrared imaging
dc.subject: Second near-infrared window
dc.title: Deep learning for in vivo near-infrared imaging
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1073/PNAS.2021446118
dc.identifier.pmid: 33372162
dc.identifier.scopus: eid_2-s2.0-85099116056
dc.identifier.volume: 118
dc.identifier.issue: 1
dc.identifier.spage: article no. e2021446118
dc.identifier.epage: article no. e2021446118
dc.identifier.eissn: 1091-6490
dc.identifier.isi: WOS:000607270100077

Export: this record can be exported via the repository's OAI-PMH interface in XML formats, or in other non-XML formats.
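
As a usage note, a record like this one can typically be retrieved over OAI-PMH with a standard GetRecord request. In the sketch below, the base URL and the OAI identifier are guesses derived from the handle 10722/325505 and are not confirmed by this page; only the query parameters (verb, metadataPrefix=oai_dc, identifier) come from the OAI-PMH specification.

```python
# Hypothetical sketch of an OAI-PMH GetRecord request for this item.
# ASSUMPTIONS: BASE_URL and IDENTIFIER are guesses and should be replaced with
# the repository's documented values; only the query parameters themselves are
# defined by the OAI-PMH specification.
import urllib.parse
import urllib.request

BASE_URL = "https://hub.hku.hk/oai/request"    # assumed OAI-PMH endpoint
IDENTIFIER = "oai:hub.hku.hk:10722/325505"     # assumed identifier for handle 10722/325505

query = urllib.parse.urlencode({
    "verb": "GetRecord",          # standard OAI-PMH verb
    "metadataPrefix": "oai_dc",   # unqualified Dublin Core, matching the DC fields above
    "identifier": IDENTIFIER,
})

with urllib.request.urlopen(f"{BASE_URL}?{query}") as resp:
    print(resp.read().decode("utf-8"))  # raw XML record
```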