Running a global, world-class business with data-driven decision making requires ingesting and processing diverse sets of data at tremendous scale. How does a company achieve this while ensuring quality and honoring its commitment as a responsible steward of data? This session will detail how Starbucks has embraced big data, building robust, high-quality pipelines for faster insights to drive world-class customer experiences.
Brad May is a Principal Systems Engineer at Starbucks Coffee Company with over twenty years' experience in Data Engineering and Systems Development, spanning point solutions, enterprise-scale data warehouses, and, most recently, Big Data solutions in the Cloud. He is currently focused on establishing best practices for efficient, highly resilient data pipelines on the Starbucks Business Intelligence and Data Services team.
Arjit Dhavale is an Engineering Manager at Starbucks Coffee Company with over 12 years of experience architecting and implementing enterprise-scale applications on the Cloud Platform. His primary focus is Big Data Engineering, Machine Learning, and Storage & Analytics. At Starbucks, he concentrates on the evolution and delivery of technology for large-scale data processing, supporting machine learning and AI analysis on the Enterprise Data Analytics Platform.
Denny Lee is a Developer Advocate at Databricks. He is a hands-on distributed systems and data science engineer with extensive experience developing internet-scale infrastructure, data platforms, and predictive analytics systems for both on-premises and cloud environments. He holds a Master's in Biomedical Informatics from Oregon Health & Science University and has architected and implemented powerful data solutions for enterprise healthcare customers. His current technical focuses include Distributed Systems, Apache Spark, Deep Learning, Machine Learning, and Genomics.