Enabling Production ML at Scale With Lakehouse
Available on demand
The journey from training a simple model to creating a production machine learning pipeline is often seen as challenging. Typically, you have to overcome hurdles such as siloed data, inconsistent tooling and overly complex infrastructure.
The key to productionizing ML is finding a solution that’s scalable, serverless and automated at every step.
Learn how the Databricks Lakehouse Platform simplifies your journey by delivering a unified, data-centric ML environment that brings the same platform, tools and governance you already use for the rest of your data to machine learning.
Watch now to learn how to:
- Ingest, prepare and process data on a platform designed to handle production-scale ML training
- Leverage data science notebooks and MLflow to train and track your ML experiments (see the sketch after this list) — or let AutoML do the experimentation for you
- Deploy models as serverless endpoints that autoscale quickly to reduce cost while maintaining SLAs
- Monitor your deployed models for important metrics like drift and accuracy
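For a concrete feel of the MLflow experiment tracking mentioned above, here is a minimal sketch that logs the parameters, accuracy and model artifact of a scikit-learn training run. The dataset, model and parameter names are illustrative assumptions, not anything specific to the webinar or the Databricks platform.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative toy dataset, split for training and evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Track the run with MLflow: parameters, a metric, and the model artifact.
with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}  # hypothetical values
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```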
Speakers
Patrick Wendell
Co-founder and VP of Engineering
Databricks
Kasey Uhlenhuth
Staff Product Manager, Machine Learning
Databricks
Craig Wiley
Senior Director, Product Management
Databricks
Don MacLennan
SVP Engineering and Product
Barracuda Networks