TY - GEN
T1 - Synchronization and analysis of multimodal bronchoscopic airway exams for early lung cancer detection
AU - Chang, Qi
AU - Daneshpajooh, Vahid
AU - Byrnes, Patrick D.
AU - Ahmad, Danish
AU - Toth, Jennifer
AU - Bascom, Rebecca
AU - Higgins, William E.
N1 - Publisher Copyright:
© COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
PY - 2024
Y1 - 2024
N2 - Because lung cancer is the leading cause of cancer-related deaths globally, early disease detection is vital. To help with this issue, advances in bronchoscopy have brought about three complementary noninvasive video modalities for imaging early-stage bronchial lesions along the airway walls: white-light bronchoscopy (WLB), autofluorescence bronchoscopy (AFB), and narrow-band imaging (NBI). Recent research indicates that performing a multimodal airway exam, i.e., using the three modalities together, potentially enables a more robust disease assessment than any single modality. Unfortunately, to perform a multimodal exam, the physician must manually examine each modality's video stream separately and then mentally correlate lesion observations. Not only is this process extremely tedious and skill-dependent, but it also poses the risk of missed lesions, thereby reducing diagnostic confidence. What is needed is a methodology and set of tools for easily leveraging the complementary information offered by these modalities. To address this need, we propose a framework for video synchronization and fusion tailored to multimodal bronchoscopic airway examination. Our framework, built into an interactive graphical system, entails a three-step process. First, for each of the three airway exams, one per bronchoscopic modality, several key airway video-frame landmarks are noted with respect to the patient's CT-based 3D airway tree model (CT = computed tomography), where the airway tree model serves as the reference space for the entire process. These landmarks create a set of connections between the videos and the airway tree to facilitate subsequent fine registration. Second, the landmark set, along with additional video frames that either contain lesions flagged by two deep-learning-based detection networks or lie between landmarks to help fill surface gaps, is finely registered to the airway tree using a CT-video-based global registration method. Lastly, the registered frames are mapped and fused, via texture mapping, onto the CT-based 3D airway tree's endoluminal surface. This enables sequential review of the synchronized multimodal surface structure and lesion locations through interactive graphical tools along a path navigating the airway tree. Results with patient multimodal bronchoscopic airway exams show the promise of our methods.
AB - Because lung cancer is the leading cause of cancer-related deaths globally, early disease detection is vital. To help with this issue, advances in bronchoscopy have brought about three complementary noninvasive video modalities for imaging early-stage bronchial lesions along the airway walls: white-light bronchoscopy (WLB), autofluorescence bronchoscopy (AFB), and narrow-band imaging (NBI). Recent research indicates that performing a multimodal airway exam, i.e., using the three modalities together, potentially enables a more robust disease assessment than any single modality. Unfortunately, to perform a multimodal exam, the physician must manually examine each modality's video stream separately and then mentally correlate lesion observations. Not only is this process extremely tedious and skill-dependent, but it also poses the risk of missed lesions, thereby reducing diagnostic confidence. What is needed is a methodology and set of tools for easily leveraging the complementary information offered by these modalities. To address this need, we propose a framework for video synchronization and fusion tailored to multimodal bronchoscopic airway examination. Our framework, built into an interactive graphical system, entails a three-step process. First, for each of the three airway exams, one per bronchoscopic modality, several key airway video-frame landmarks are noted with respect to the patient's CT-based 3D airway tree model (CT = computed tomography), where the airway tree model serves as the reference space for the entire process. These landmarks create a set of connections between the videos and the airway tree to facilitate subsequent fine registration. Second, the landmark set, along with additional video frames that either contain lesions flagged by two deep-learning-based detection networks or lie between landmarks to help fill surface gaps, is finely registered to the airway tree using a CT-video-based global registration method. Lastly, the registered frames are mapped and fused, via texture mapping, onto the CT-based 3D airway tree's endoluminal surface. This enables sequential review of the synchronized multimodal surface structure and lesion locations through interactive graphical tools along a path navigating the airway tree. Results with patient multimodal bronchoscopic airway exams show the promise of our methods.
KW - bronchoscopy
KW - early detection
KW - lung cancer
KW - multimodality image/data display
KW - video summarization and synchronization
UR - http://www.scopus.com/inward/record.url?scp=85192225013&partnerID=8YFLogxK
U2 - 10.1117/12.3004212
DO - 10.1117/12.3004212
M3 - Conference contribution
AN - SCOPUS:85192225013
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2024
A2 - Siewerdsen, Jeffrey H.
A2 - Rettmann, Maryam E.
PB - SPIE
T2 - Medical Imaging 2024: Image-Guided Procedures, Robotic Interventions, and Modeling
Y2 - 19 February 2024 through 22 February 2024
ER -