Vishwanath Subramanian

Director of Data and Analytics Engineering, Starbucks

Vishwanath Subramanian is Director of Data and Analytics Engineering at Starbucks. He has over 15 years of experience with a background in applied analytics, distributed systems, data warehouses, product management, and software development. At Starbucks, his key focus is providing next-generation analytics for the enterprise, enabling large-scale data processing across various platforms and powering machine learning workflows for amazing customer experiences.

UPCOMING SESSIONS

Keynote 3: How Starbucks is Achieving its ‘Enterprise Data Mission’ to Enable Data and ML at Scale and Provide World-Class Customer Experiences (Summit 2020)

Starbucks makes sure that everything we do is through the lens of humanity: from our commitment to the highest quality coffee in the world, to the way we engage with our customers and communities to do business responsibly. A key aspect of ensuring those world-class customer experiences is data. This talk highlights the Enterprise Data Analytics mission at Starbucks, which enables decisions powered by data at tremendous scale. This includes everything from processing data at petabyte scale under governed processes to deploying platforms at the speed of business and enabling ML across the enterprise. This session will detail how Starbucks has built world-class enterprise data platforms to drive world-class customer experiences.

PAST SESSIONS

Simplify Distributed TensorFlow Training for Fast Image Categorization at Starbucks (Summit 2019)

In addition to the many data engineering initiatives at Starbucks, we are also working on many interesting data science initiatives. The business scenarios involved in our deep learning initiatives range from planogram analysis (laying out our stores for efficient partner and customer flow) to predicting product pairings (e.g., purchase a caramel macchiato and perhaps you would like a caramel brownie) via their product components using graph convolutional networks. For this session, we will focus on how to run distributed Keras (TensorFlow backend) training to perform image analytics. This will be combined with MLflow to showcase the data science lifecycle and how Databricks + MLflow simplifies it.
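The sketch below is not the presenters' actual pipeline; it is a minimal illustration of the pattern the session describes: Keras training distributed with a tf.distribute strategy and tracked with MLflow. The MNIST dataset, the small CNN, and the hyperparameters are placeholders assumed for brevity, and MirroredStrategy stands in for whatever multi-GPU or multi-node strategy a real deployment would use.

```python
# Minimal sketch: distributed Keras (TensorFlow) training tracked with MLflow.
# Assumes TensorFlow 2.x and MLflow are installed; dataset, model, and
# hyperparameters are illustrative placeholders, not the Starbucks workload.
import mlflow
import tensorflow as tf


def build_model():
    # Small CNN for image categorization; a stand-in for the real architecture.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])


# MirroredStrategy replicates the model across the GPUs on one machine;
# multi-node training would swap in a different tf.distribute strategy.
strategy = tf.distribute.MirroredStrategy()

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train / 255.0)[..., None]  # normalize pixels, add channel dim

with strategy.scope():
    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

with mlflow.start_run():
    # Log parameters and metrics so each training run is reproducible
    # and comparable in the MLflow tracking UI.
    epochs, batch_size = 2, 256
    mlflow.log_param("epochs", epochs)
    mlflow.log_param("batch_size", batch_size)
    history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)
    mlflow.log_metric("train_accuracy", float(history.history["accuracy"][-1]))
    mlflow.tensorflow.log_model(model, artifact_path="model")
```

On Databricks, the same run would typically be launched on a GPU cluster with the tracking URI already configured, so the MLflow calls above record parameters, metrics, and the trained model without extra setup.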