Dillon Bostwick

Solutions Architect, Databricks

Dillon Bostwick has spent the last two years as a Solutions Architect at Databricks, where he provides support and architectural guidance for customers using Databricks' managed Spark, MLflow, and Delta Lake platform for their machine learning and big data projects. He is also active in developing new projects that apply DevOps best practices to accelerate the deployment of Databricks infrastructure. Previously, Dillon was a software engineer in the software-defined networking and security space, where he served as technical lead for an application that automated the orchestration of virtualized network elements at scale.

UPCOMING SESSIONS

From DevOps to MLOps: Bridging the Gap between Traditional CI/CD and Modern Machine Learning (Summit 2020)

Continuous Integration and Continuous Delivery have been common patterns in software development methodologies over the last decade. CI and CD have evolved and converged, but the core message remains: build repeatable processes, and automate and version everything. These lessons are based on decades of experience in traditional software engineering. In parallel, we observe a number of distinct but similar concerns in machine learning: governance, reproducibility, observability, and concept drift, not to mention concerns brought to light in the Google NIPS classic 'Hidden Technical Debt in Machine Learning Systems,' such as entangled data dependencies. For example, a deployed model must be monitored and updated regularly to prevent decay due to concept and data drift. As a result, new paradigms such as MLOps and Martin Fowler's 'CD4ML' (Fowler, 2019) seek to acknowledge the potential application of CI/CD principles to data science, while recognizing the fundamental differences between traditional software and machine learning. We cannot brute-force the learnings and wisdom of traditional software development onto data science without a deep understanding of the cultural and technical differences, which often extend beyond modeling into data preparation and monitoring.
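The monitoring idea above (watch a deployed model for decay and trigger an update) can be sketched without any particular library. The following is a minimal, illustrative example, not a method from the session: it tracks rolling accuracy over a sliding window and flags drift when accuracy falls more than a tolerance below the baseline measured at deployment time. All names and thresholds here are assumptions for the sketch.

```python
from collections import deque


def drift_monitor(baseline_accuracy, window_size=100, tolerance=0.05):
    """Return a callback that flags possible concept/data drift.

    Drift is flagged when rolling accuracy over the last `window_size`
    predictions falls more than `tolerance` below `baseline_accuracy`.
    (Illustrative rule; real systems often also compare input
    distributions, not just accuracy.)
    """
    window = deque(maxlen=window_size)

    def observe(prediction_was_correct):
        window.append(1 if prediction_was_correct else 0)
        if len(window) < window_size:
            return False  # not enough evidence yet
        rolling_accuracy = sum(window) / len(window)
        return rolling_accuracy < baseline_accuracy - tolerance

    return observe
```

In production the `observe` callback would be fed by a labeling or feedback pipeline, and a `True` result would trigger alerting or automated retraining rather than being consumed directly.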

Key Takeaways:

  1. Build a CI/CD pipeline covering the path from data through training, tuning, QA, production, and monitoring.
  2. Train and tune an ML model in a reproducible Docker or Conda environment, using data managed and versioned by Delta Lake.
  3. Automate your QA processes using MLflow. Use CD best practices to deploy and monitor the model on a Kubernetes cluster or Databricks Spark cluster.
  4. Detect and remediate decay due to data drift and concept drift.
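The QA automation in takeaway 3 can be reduced to a promotion gate: a candidate model is only allowed into production if it clears an absolute quality floor and does not regress against the current production model. The sketch below is a hypothetical rule with illustrative metric names and thresholds (not taken from the session); in practice the resulting decision would drive something like an MLflow Model Registry stage transition.

```python
def qa_gate(candidate_metrics, production_metrics,
            min_accuracy=0.85, max_regression=0.01):
    """Decide whether a candidate model may be promoted to production.

    Hypothetical promotion rule:
      - the candidate must meet an absolute accuracy floor, and
      - it must not regress more than `max_regression` below the
        current production model's accuracy.
    Returns (promote: bool, reason: str).
    """
    cand = candidate_metrics["accuracy"]
    prod = production_metrics.get("accuracy", 0.0)
    if cand < min_accuracy:
        return False, f"accuracy {cand:.3f} below floor {min_accuracy}"
    if cand < prod - max_regression:
        return False, f"regression vs production accuracy {prod:.3f}"
    return True, "promote"
```

Running this check in CI against metrics logged during training keeps the promotion decision versioned and repeatable, in the same spirit as the "automate and version everything" message of the abstract.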

PAST SESSIONS