Saurabh is a Senior Analytics Consultant on the R&D team at Kavi Global. He is currently involved in the development of big data and analytics platforms that leverage the data processing and machine learning capabilities of Spark. He has worked in several industry verticals, including healthcare, manufacturing, transportation, and logistics. Saurabh holds an MS in Industrial Engineering from the University of Wisconsin-Madison.
Advanced machine vision is increasingly used to investigate, diagnose, and identify potential remedies for complex health issues and to track their progression. In this study, a behavioral neuroscientist at the University of Chicago and his colleagues collaborated with Kavi Global to characterize 3D feeding behavior and its potential changes caused by neurological conditions such as ALS, Parkinson's disease, and stroke, or by oral environmental changes such as tooth extraction and dental implants. Videos of rodents feeding on kibble are recorded with a high-speed biplanar videofluoroscopy technique (XROMM), and feeding behavior is analyzed by tracking radio-opaque fiducial markers implanted in the head region. Until now, the marker tracking process was manual and tedious, and was not designed to process massive amounts of longitudinal data.

This session will highlight a near-automated, deep learning-based solution for detecting and tracking fiducial markers in the videos, resulting in a more efficient and robust process, with a 300+ times reduction in data processing time compared to manual use of the existing software. Our approach involved the following steps: (i) marker detection: deep learning algorithms identify the pixels corresponding to markers within each frame; (ii) marker tracking: Kalman filtering combined with the Hungarian algorithm tracks markers across frames; (iii) 2D-to-3D conversion: the videos recorded by the two cameras are sequence-matched, and the 2D marker track coordinates are triangulated to generate 3D marker locations. The features extracted from the videos are used to characterize behaviorally relevant kinematics such as rhythmic chewing and swallowing. The solution uses the TensorFlow Python APIs and Spark.

Session hashtag: #AISAIS14
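To make step (ii) concrete, the sketch below shows the general shape of Kalman-filter-based marker tracking with per-frame assignment of detections to tracks. All names, matrices, and noise values here are illustrative assumptions, not the authors' implementation, and for the small marker counts involved a brute-force search over permutations stands in for the Hungarian algorithm:

```python
import itertools
import numpy as np

class MarkerTrack:
    """Constant-velocity Kalman filter over a marker's 2D pixel position.

    Illustrative sketch only: state, covariance, and noise values are
    assumptions, not the values used in the actual solution.
    """
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])   # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0                     # state covariance
        self.F = np.eye(4)                            # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.zeros((2, 4))                     # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                     # process noise
        self.R = np.eye(2)                            # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def assign(predicted, detections):
    """Detection index for each track, minimizing total distance.

    Brute force over permutations; the Hungarian algorithm gives the
    same optimal assignment in polynomial time for larger marker sets.
    """
    best, best_cost = None, np.inf
    for perm in itertools.permutations(range(len(detections))):
        cost = sum(np.linalg.norm(predicted[i] - detections[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# Toy example: two markers moving right; frame 2 reports them in swapped order.
frames = [
    [(0.0, 0.0), (0.0, 5.0)],
    [(1.0, 5.1), (1.0, 0.1)],   # order swapped
    [(2.1, 0.2), (2.0, 5.2)],
]
tracks = [MarkerTrack(xy) for xy in frames[0]]
for detections in frames[1:]:
    preds = [t.predict() for t in tracks]
    for i, j in enumerate(assign(preds, [np.array(d) for d in detections])):
        tracks[i].update(detections[j])

# Track identities survive the shuffle: track 0 stays near y = 0, track 1 near y = 5.
print(round(tracks[0].x[1], 1), round(tracks[1].x[1], 1))
```

The assignment step is what keeps marker identities stable when detections arrive in arbitrary order from frame to frame; the Kalman prediction supplies the expected position each detection is matched against.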
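Step (iii) can be sketched with standard linear (DLT) triangulation: given calibrated projection matrices for the two X-ray cameras and a marker's 2D track coordinates in each view, the 3D location falls out of a small least-squares problem. The projection matrices and toy geometry below are assumptions for illustration, not the biplanar calibration from the study:

```python
import numpy as np

def triangulate(P1, P2, xy1, xy2):
    """3D point whose projections best match xy1 (camera 1) and xy2 (camera 2).

    P1, P2 are 3x4 projection matrices; the homogeneous solution is the
    right singular vector of the stacked DLT constraints.
    """
    A = np.vstack([
        xy1[0] * P1[2] - P1[0],
        xy1[1] * P1[2] - P1[1],
        xy2[0] * P2[2] - P2[0],
        xy2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Toy biplanar setup: camera 1 looks down the z axis, camera 2 is rotated
# 90 degrees about y and translated, roughly mimicking orthogonal views.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
P2 = np.hstack([R, np.array([[0.], [0.], [5.]])])

# Project a known 3D marker into both views, then recover it.
X_true = np.array([0.5, -0.2, 3.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
print(np.allclose(X_hat, X_true))
```

With noise-free synthetic projections the recovery is exact; on real tracks the same least-squares formulation absorbs detection noise from the two views. The preceding sequence-matching step ensures the two 2D coordinates fed in here come from the same instant in time.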