Award: Presidential Poster Award
Rintaro Hashimoto, MD1, Nabil El Hage Chehade, MD1, Kenneth J. Chang, MD, FACG2, Tyler Dao1, Andrew Ninh1, James Requa1, William Karnes, MD2, Jason Samarasena, MD, FACG1
1University of California Irvine Medical Center, Orange, CA; 2University of California Irvine, Orange, CA
Introduction: The visual detection of early esophageal neoplasia (high-grade dysplasia and T1 cancer) in Barrett's esophagus (BE) with white-light endoscopy and virtual chromoendoscopy remains difficult. The aim of this study was to assess whether a convolutional neural network (CNN)-based artificial intelligence system can aid in the recognition of early esophageal neoplasia in BE.
Methods: Over 800 images of histology-proven early esophageal neoplasia in BE (high-grade dysplasia or T1 cancer) were retrospectively collected from 65 patients (Dysplasia Group). Within each image, the area of neoplasia was masked by two experts in BE imaging using image annotation software. Over 800 control images of BE without high-grade dysplasia, proven by either histology or confocal endomicroscopy, were also collected (Non-Dysplastic Group).
A training set of ~1,200 images, split 50/50 between the Dysplasia and Non-Dysplasia groups, was used to train the algorithm. We used a CNN with a hybrid algorithm design incorporating Inception blocks to deepen the neural network and maximize efficiency and accuracy. The algorithm was pre-trained on ImageNet and then fine-tuned to produce the correct binary classification: “Dysplastic” (1) or “Non-dysplastic” (0). The Adam optimizer performed stochastic optimization of a binary cross-entropy loss function, yielding a probability value between 0 and 1. A set of 458 images distinct from the training set was used for algorithm validation.
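The abstract does not publish the study's implementation; as a minimal NumPy sketch of the components named above (a sigmoid mapping the network's logit to a probability, the binary cross-entropy loss being minimized, and one Adam update step), with all function names our own illustrative choices:

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(y_true, p_pred, eps=1e-7):
    """Binary cross-entropy between labels (1 = Dysplastic, 0 = Non-dysplastic)
    and predicted probabilities; clipping avoids log(0)."""
    p = np.clip(p_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, then a scaled step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

In practice a framework such as TensorFlow or PyTorch would supply these primitives along with the pre-trained Inception backbone; the sketch only makes explicit what "stochastic optimization of a binary cross-entropy loss" computes per step.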
We additionally developed an object detection algorithm that drew localization boxes in real time around regions classified as dysplasia. Testing was performed on near-focus and non-near-focus (far) images, under both white light and narrow-band imaging (NBI).
Results: The CNN analyzed 458 test images (225 dysplasia/233 non-dysplasia) and correctly detected early neoplasia in BE with a sensitivity of 95.6% and a specificity of 91.8%. The accuracy was 93.7% and the AUC was 0.94 (Fig. 3). For the object detection algorithm, across all images in the validation set the system achieved a mean average precision (mAP) of 0.7533 at an intersection over union (IoU) threshold of 0.3, with a sensitivity of 96.7% and a specificity of 87.6% (Fig. 1).
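The reported rates follow directly from a confusion matrix, and the localization boxes are scored by intersection over union; a sketch of both computations (function names are our own, not from the study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The abstract reports rates rather than raw counts; counts of roughly 215 true positives (of 225 dysplasia images) and 214 true negatives (of 233 non-dysplasia images) are consistent with the stated 95.6% sensitivity, 91.8% specificity, and 93.7% accuracy. A predicted box counts as a detection when `iou` with the expert-annotated mask exceeds the 0.3 threshold used for the mAP figure.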
Discussion: Our AI model detected early esophageal neoplasia in Barrett's esophagus images with 93.7% accuracy. In addition, the object detection algorithm was able to draw localization boxes around areas of dysplasia with high precision.
Citation: Rintaro Hashimoto, MD; Nabil El Hage Chehade, MD; Kenneth J. Chang, MD, FACG; Tyler Dao; Andrew Ninh; James Requa; William Karnes, MD; Jason Samarasena, MD, FACG. P0282 - HIGH ACCURACY AND EFFECTIVENESS WITH DEEP NEURAL NETWORKS AND ARTIFICIAL INTELLIGENCE IN DETECTION OF EARLY ESOPHAGEAL NEOPLASIA IN BARRETT'S ESOPHAGUS. Program No. P0282. ACG 2019 Annual Scientific Meeting Abstracts. San Antonio, Texas: American College of Gastroenterology.