AUTOMATED STOOL IMAGE ANALYSIS BY ARTIFICIAL INTELLIGENCE IN A SMART TOILET
Presentation Number: Sa652
Authors: Sonia Grego¹, Jin Zhou¹, Krishnendu Chakrabarty¹, Brian Stoner¹, Jose R. Ruiz¹, Deborah A. Fisher¹
¹Duke University, Durham, North Carolina, United States
Introduction: Patient self-report of stool characteristics is critical for the diagnosis and management of many acute and chronic gastrointestinal (GI) conditions, such as bleeding, infection, and inflammatory bowel disease. However, self-report is limited by inaccuracy and the burden of tracking. We have developed a Smart Toilet that images stool in the toilet plumbing after flushing, out of view of the user (Figure 1). The goal of this project was to develop and test machine learning algorithms to characterize stool form and detect gross blood in stool images.
Methods: We constructed a dataset of toilet stool images from photos uploaded anonymously by research participants and from images found on the internet. Participants were adults from the general public recruited through social media and internet postings. Algorithm results were compared with independent annotations of Bristol Stool Scale (BSS) category and visible blood by a gastroenterologist. A subset of images was independently annotated by two gastroenterologists; interrater reliability was estimated with Cohen's kappa coefficient (k). Computationally efficient convolutional neural networks (CNNs) were used for stool form classification. Blood detection (red/maroon blood or melena) leveraged perceptual color quantization coupled with mutual information for feature selection. The outcomes were the balanced accuracy of the machine learning algorithms and the area under the curve (AUC) from receiver operating characteristic (ROC) analysis.
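As background on the interrater-reliability step, Cohen's kappa measures agreement between two annotators beyond what chance alone would produce. The minimal sketch below (not the study's code; the rating values are hypothetical) illustrates the computation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: interrater agreement corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, assuming each rater labels
    # independently at their observed label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BSS annotations (1-7) of five images by two raters.
a = [1, 2, 3, 3, 2]
b = [1, 2, 3, 2, 2]
print(round(cohens_kappa(a, b), 4))  # 0.6875
```

Values near 0 indicate chance-level agreement; the k = 0.435 and k = 0.545 reported below fall in the commonly cited moderate-agreement range.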
Results: We collected 3629 unique stool images: 2720 from the internet and 909 uploaded by research participants. 256 online images and 45 participant-uploaded images were excluded for poor image quality, leaving 3328 images for analysis. The dataset spanned all 7 BSS classifications, which were also collapsed into 3 categories: loose, normal, and constipated. A total of 552 images were annotated by both gastroenterologists. Interrater reliability was k = 0.435 for the 7 BSS categories and k = 0.545 (satisfactory agreement) for the consolidated categories. Stool form classification achieved a balanced accuracy of 85.1% using MobileNetV2, a computationally efficient CNN compatible with microcontroller operation within the Smart Toilet system. Gross blood detection achieved a balanced accuracy of 76.3% with a decision tree classifier. The AUCs for each of the 3 collapsed stool form categories were all >0.91 (Figure 2).
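The two reported metrics are standard and straightforward to compute from first principles. The sketch below (not the authors' evaluation code; labels and scores are made up) shows balanced accuracy as the mean of per-class recall, which is robust to class imbalance, and AUC as the probability that a positive case outranks a negative one:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall over the classes present in y_true."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def roc_auc(scores_pos, scores_neg):
    """AUC = P(positive scores above negative); ties count as 1/2."""
    pairs = [(p, q) for p in scores_pos for q in scores_neg]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p, q in pairs)
    return wins / len(pairs)

# Hypothetical binary blood labels: 0 = no blood, 1 = gross blood.
y_true = [0, 0, 0, 1]
y_pred = [0, 0, 1, 1]
print(round(balanced_accuracy(y_true, y_pred), 4))  # 0.8333
print(roc_auc([0.9, 0.8], [0.85, 0.3]))  # 0.75
```

With an imbalanced test set (few bleeding cases), plain accuracy would reward always predicting "no blood"; balanced accuracy does not, which is presumably why it was chosen here.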
Conclusions: These results support the accuracy and feasibility of real-time automated stool characterization as part of the Smart Toilet system. We expect patient adherence to improve because the need for manual data collection is removed. Monitoring baseline and longitudinal physiological data from stool is a promising precision health tool for early intervention and improved clinical outcomes.