Successfully building a machine learning model is hard enough. Reproducing your results at scale — enabling others to reproduce your pipelines, comparing results across versions, moving models into production, and redeploying and rolling out updated models — is exponentially harder. To address these challenges and accelerate innovation, many companies are building custom “ML platforms” to automate the end-to-end ML lifecycle.
Join our interactive MLOps Virtual Event to hear more about the latest developments and best practices for managing the full ML lifecycle on Databricks with MLflow. We will cover a checklist of capabilities you’ll need, common pitfalls, technological and organizational challenges, and how to overcome them.
Presentations will be enhanced with demos, as well as success stories and learnings from experts who have deployed real-world examples for forecasting, IoT analytics and more. Live Q&As and discussions will keep this event engaging for data science leaders and practitioners alike.
Agenda at a glance:
- Opening Keynote, Product Demo and Live Q&As
  - MLOps and ML Platforms: State of the Industry – opening keynote by Matei Zaharia, Co-founder and Chief Technology Officer, Databricks, and Clemens Mewald, Director of Product Management, Databricks
  - Operationalizing Data Science & ML on Databricks Using MLflow (demo) by Sean Owen
- Customer Stories: Building ML Platforms for Real-World Use Cases
  - “Can you email me your code?” – Data science best practices and how to facilitate knowledge management, reproducibility, and collaboration across the full data science workflow at Blue Cross
  - Automatic Forecasting Using Prophet, Databricks, Delta Lake and MLflow – Creating a robust, fault-tolerant, auditable and reproducible ML pipeline at Atlassian
  - Reducing Energy Waste with IoT Analytics and Machine Learning – Automating end-to-end workflows, from data ingestion to featurization, prototyping, validation, retraining, and productionization of models at Quby
  - Continuous Delivery of Models at Outreach – An overview of a framework for managing and coordinating data, models, and artifacts across the machine learning stages, along with a CI/CD pipeline that continuously integrates and delivers models to a RESTful endpoint for production serving
- Closing Panel and Q&As
  - Closing panel and interactive Q&As hosted by Ben Lorica, Chief Data Scientist, Databricks