Convolutional neural networks: Ensemble modeling, fine-tuning and unsupervised semantic localization for neurosurgical CLE images

Mohammadhassan Izadyyazdanabadi, Evgenii Belykh, Michael Mooney, Nikolay Martirosyan, Jennifer Eschbacher, Peter Nakaji, Mark C. Preul, Yezhou Yang

Research output: Contribution to journal › Article › peer-review


Abstract

Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence technology undergoing assessment for applications in brain tumor surgery. Many CLE images are distorted and therefore interpreted as nondiagnostic; however, a single clear CLE image may suffice for intraoperative diagnosis of the tumor. Manual examination of thousands of nondiagnostic images during surgery is impractical, which creates an opportunity for a model to select diagnostic images for the pathologist's or surgeon's review. In this study, we sought to develop a deep learning model that automatically detects diagnostic images. We explored the effect of training regimes and ensemble modeling, and we localized histological features in diagnostic CLE images. The developed model achieved promising agreement with the ground truth. Given its speed and precision, the proposed method has the potential to be integrated into the operative workflow of brain tumor surgery.
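The abstract names fine-tuning and ensemble modeling but does not describe the architecture. As a rough illustration only (not the authors' actual model or code), the sketch below shows a common form of this pattern: several ImageNet-pretrained CNNs are fine-tuned for binary diagnostic/nondiagnostic classification, and their softmax outputs are averaged at inference time. The ResNet-18 backbone, the number of ensemble members, and the 0.5 decision threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a fine-tuned CNN ensemble for selecting
# diagnostic CLE frames. Backbone choice, ensemble size, and threshold
# are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

def build_finetuned_backbone(num_classes: int = 2) -> nn.Module:
    """Replace the ImageNet classifier head so the network can be
    fine-tuned on CLE frames (diagnostic vs. nondiagnostic)."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

class SoftmaxEnsemble(nn.Module):
    """Late-fusion ensemble: average the softmax outputs of several
    independently fine-tuned member networks."""
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = [torch.softmax(m(x), dim=1) for m in self.members]
        return torch.stack(probs).mean(dim=0)

# Usage sketch: three members, e.g. trained under different regimes/seeds.
ensemble = SoftmaxEnsemble([build_finetuned_backbone() for _ in range(3)])
ensemble.eval()
frames = torch.randn(8, 3, 224, 224)       # stand-in batch of CLE frames
diagnostic_prob = ensemble(frames)[:, 1]   # P(frame is diagnostic)
keep = diagnostic_prob > 0.5               # frames flagged for review
```

Averaging member probabilities rather than hard votes is one simple way such an ensemble can smooth over individual networks' errors; the paper's actual fusion strategy may differ.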

Original language: English (US)
Pages (from-to): 10-20
Number of pages: 11
Journal: Journal of Visual Communication and Image Representation
Volume: 54
State: Published - Jul 2018

Keywords

  • Brain tumor
  • Confocal laser endomicroscopy
  • Ensemble modeling
  • Neural network
  • Surgical vision
  • Unsupervised localization

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

