How to Utilize MLflow and Kubernetes to Build an Enterprise ML Platform - Databricks



In large enterprises, large solutions are sometimes required to tackle even the smallest tasks, and ML is no different. At Comcast we are building a comprehensive, configuration-based, continuously integrated and deployed platform for data pipeline transformations, model development and deployment. This is accomplished using a range of tools and frameworks such as Databricks, MLflow and Apache Spark. With a Databricks environment used by hundreds of researchers and petabytes of data, scale is critical to Comcast, so making it all work together in a frictionless experience is a high priority. The platform consists of a number of components: an abstraction for data pipelines and transformations that gives our data scientists the freedom to combine the most appropriate algorithms from different frameworks; experiment tracking and project and model packaging using MLflow; and model serving via the Kubeflow environment on Kubernetes. We will discuss the architecture, progress and current state of the platform, as well as the challenges we had to overcome to make it work at Comcast scale. As a machine learning practitioner, you will learn: an example of data pipeline abstraction; ways to package and track your ML projects and experiments at scale; and how Comcast uses Kubeflow on Kubernetes to bring everything together.



About Nicholas Pinckernell

Nick leads efforts on machine learning pipeline and platform strategies for the Applied AI Research team at Comcast. He has focused on software development, big data, distributed computing and telecommunications research for many years. He is currently pursuing his MS in Computer Science at UIUC and, in his free time, enjoys working with IoT.