Michael Shtelma - Databricks

Michael Shtelma

Solutions Architect, Databricks

Databricks Solutions Architect and former Teradata Data Engineer with a focus on operationalizing Machine Learning workloads in the cloud.

UPCOMING SESSIONS

Continuous Delivery of ML-enabled Pipelines on Databricks using MLflow - Summit 2020

ML development brings many new complexities beyond the traditional software development lifecycle. Unlike software projects, ML projects cannot simply be handed over once they have been delivered and deployed; they must be continuously monitored to verify that model performance still satisfies all requirements. In most ML use cases we also have to deal with updates to the training set, which can influence model performance. In addition, most models require certain data pre- and post-processing at runtime, which makes the deployment process even more challenging. In this talk, we will show how MLflow can be used to build an automated CI/CD pipeline that deploys a new version of the model, together with the code around it, to production. We will also show how the same approach can be used in the training pipeline to retrain the model when new data arrives and to deploy the new version if it satisfies all requirements.
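The abstract itself contains no code, but a minimal sketch of such a promotion gate, assuming the candidate model is tracked with MLflow and promoted through the MLflow Model Registry, might look like the snippet below. The model name churn-classifier, the metric val_accuracy, and the "better metric wins" rule are illustrative assumptions, not details from the talk.

```python
# Hypothetical promotion gate: register and promote a newly trained model only if
# its validation metric beats the model currently in Production.
# Model name, metric key, and comparison rule are illustrative assumptions.
import mlflow
from mlflow.tracking import MlflowClient

MODEL_NAME = "churn-classifier"   # assumed registry name
METRIC_KEY = "val_accuracy"       # assumed evaluation metric

def promote_if_better(candidate_run_id: str) -> bool:
    client = MlflowClient()
    candidate_metric = client.get_run(candidate_run_id).data.metrics[METRIC_KEY]

    # Compare against the model currently serving in Production, if one exists.
    prod_versions = client.get_latest_versions(MODEL_NAME, stages=["Production"])
    if prod_versions:
        prod_metric = client.get_run(prod_versions[0].run_id).data.metrics[METRIC_KEY]
        if candidate_metric <= prod_metric:
            return False  # candidate does not satisfy the requirements; keep the current model

    # Register the candidate and move it to Production.
    model_version = mlflow.register_model(f"runs:/{candidate_run_id}/model", MODEL_NAME)
    client.transition_model_version_stage(MODEL_NAME, model_version.version, stage="Production")
    return True
```

In a CI/CD pipeline, a check of this kind would run after the training job so that only candidates that satisfy the requirements reach production.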

PAST SESSIONS

Managing the Complete Machine Learning Lifecycle with MLflow - Summit Europe 2019

ML development brings many new complexities beyond the traditional software development lifecycle. Unlike traditional software developers, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce their work. In addition, developers need to use many distinct systems to productionize models.

To address these challenges, Databricks last year unveiled MLflow, an open source project that aims to simplify the entire ML lifecycle. MLflow introduces simple abstractions to package reproducible projects, track results, and encapsulate models that can be used with many existing tools, accelerating the ML lifecycle for organizations of any size.

In the past year, the MLflow community has grown quickly: over 120 contributors from over 40 companies have contributed code to the project, and over 200 companies are using MLflow.

In this tutorial, we will show you how MLflow can help you:

  • Keep track of experiment runs and results across frameworks.
  • Execute projects remotely on a Databricks cluster, and quickly reproduce your runs.
  • Quickly productionize models using Databricks production jobs, Docker containers, Azure ML, or Amazon SageMaker.

We will demo the building blocks of MLflow as well as the most recent additions since the 1.0 release.
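As a concrete illustration of the tracking workflow demoed in the tutorial, a minimal MLflow Tracking run might look like the sketch below; the scikit-learn model, dataset, and parameter values are illustrative assumptions rather than tutorial code.

```python
# Minimal MLflow Tracking sketch: log a parameter, a metric, and the model for one run.
# The scikit-learn classifier and the wine dataset are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators).fit(X_train, y_train)

    mlflow.log_param("n_estimators", n_estimators)                    # config
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))   # result
    mlflow.sklearn.log_model(model, "model")                          # the model artifact
```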

What you will learn:

  • Understand the three main components of open source MLflow (MLflow Tracking, MLflow Projects, MLflow Models) and how each helps address challenges of the ML lifecycle.
  • How to use MLflow Tracking to record and query experiments: code, data, config, and results.
  • How to use the MLflow Projects packaging format to reproduce runs on any platform (see the sketch after this list).
  • How to use the MLflow Models general format to send models to diverse deployment tools.
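For the Projects point above, a minimal sketch of a remote run on Databricks, assuming a Git-hosted project with an MLproject file and a prepared cluster specification, could look like this; the repository URI, parameter, and cluster-spec file name are illustrative assumptions.

```python
# Hypothetical MLflow Projects sketch: run a packaged project remotely on Databricks.
# The project URI, parameters, and cluster spec file are illustrative assumptions.
import mlflow

submitted_run = mlflow.run(
    uri="https://github.com/mlflow/mlflow-example",  # any Git repo containing an MLproject file
    parameters={"alpha": 0.5},
    backend="databricks",                            # execute on a Databricks cluster
    backend_config="cluster-spec.json",              # new-cluster spec for the remote run
)
submitted_run.wait()  # block until the remote run finishes
```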

Prerequisites:

  • A fully-charged laptop (8-16GB memory) with Chrome or Firefox
  • Python 3 and pip pre-installed
  • Pre-Register for a Databricks Standard Trial
  • Basic knowledge of the Python programming language
  • Basic understanding of machine learning concepts

Managing the Complete Machine Learning Lifecycle with MLflow (continued) - Summit Europe 2019

This session continues the hands-on tutorial above; the abstract, learning objectives, and prerequisites are the same as for the preceding entry.