Machine Learning Model Deployment

This course introduces three primary machine learning deployment strategies and shows how to implement each on Databricks. After exploring the fundamentals of model deployment, the course turns to batch inference, with hands-on demonstrations and labs on using a model for batch scoring, along with considerations for performance optimization. The second part of the course covers pipeline deployment, and the final segment focuses on real-time deployment, where participants deploy models with Model Serving and use the serving endpoint for real-time inference.
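As a preview of the batch inference pattern the first module teaches, the following minimal sketch scores a table with a model already registered in Unity Catalog. The model name, alias, and table names are hypothetical placeholders, not course assets:

    # A minimal sketch of batch inference on Databricks, assuming a model
    # already registered in Unity Catalog. Names below are placeholders.
    import mlflow
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    mlflow.set_registry_uri("databricks-uc")

    # Wrap the registered model as a Spark UDF so scoring scales across the cluster.
    predict = mlflow.pyfunc.spark_udf(
        spark,
        model_uri="models:/main.default.churn_model@champion",  # hypothetical model
        result_type="double",
    )

    features = spark.table("main.default.customer_features")  # hypothetical table
    scored = features.withColumn("prediction", predict(*features.columns))

    # Persist predictions as a Delta table for downstream consumers.
    scored.write.mode("overwrite").saveAsTable("main.default.churn_predictions")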


Note:

1. This is the third course in the ‘Machine Learning with Databricks’ series.

2. Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!

Skill Level: Associate
Duration: 3h
Prerequisites

At a minimum, you should be familiar with the following before taking this course:

• Familiarity with the Databricks Data Intelligence Platform and basic workspace operations (create clusters, run code in notebooks, use basic notebook operations, import repos from Git)

• Intermediate programming experience with Python, including data manipulation libraries (pandas, numpy) and working with APIs (databricks-sdk, REST endpoints)

• Basic knowledge of MLflow for experiment tracking, model logging, model registry operations, and model versioning (see the sketch after this list)

• Understanding of machine learning fundamentals, including model training, evaluation, batch inference, and real-time deployment concepts

• Intermediate experience with Unity Catalog for data governance and model registry management

• Basic familiarity with Feature Engineering concepts, including feature tables, feature lookups, and offline vs online feature stores

• Understanding of Delta Lake operations (creating tables, performing updates, optimizing files, and using liquid clustering) and data storage optimization techniques

• Basic knowledge of Apache Spark and PySpark for distributed data processing and User Defined Functions (UDFs)
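
As a quick self-check for the MLflow prerequisite above, the following minimal sketch tracks a run, logs a model, and registers it in Unity Catalog. The catalog, schema, and model names are illustrative, not part of the course materials:

    # A minimal sketch of the assumed MLflow baseline: track a run, log a
    # model, and register it in Unity Catalog. All names are illustrative.
    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    mlflow.set_registry_uri("databricks-uc")

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

    with mlflow.start_run():
        mlflow.log_param("n_estimators", 100)
        mlflow.sklearn.log_model(
            model,
            artifact_path="model",
            registered_model_name="main.default.example_model",  # hypothetical name
            input_example=X[:5],
        )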

Registration options

Databricks has a delivery method for wherever you are on your learning journey.

Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Instructor-Led

Public and private classes taught by expert instructors, ranging from half-day to two-day courses

Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Skills@Scale

A comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details.

Upcoming Public Classes

Get Started with Lakebase

This Get Started course introduces Databricks Lakebase, a fully managed PostgreSQL service built into the Databricks Data Intelligence Platform that brings operational (OLTP) and analytical (OLAP) workloads closer together.

The course begins with a conceptual lecture that compares OLTP and OLAP systems, explaining their different performance characteristics, storage models, and typical use cases. You will also explore the challenges organizations face when maintaining separate transactional databases and analytical platforms, including data movement, latency, and architectural complexity.

You will then learn how Databricks Lakebase helps address these challenges by providing a PostgreSQL-compatible operational database that integrates directly with the Databricks Lakehouse, enabling operational applications and analytics to work together within a unified platform.

Through hands-on labs, you will:

• Create and explore a Lakebase project using autoscaling compute

• Navigate the Lakebase UI, including branching, monitoring, and configuration settings

• Create and query tables using the Lakebase SQL Editor

• Query Lakebase data from Databricks using Lakehouse Federation and foreign catalogs

• Perform Reverse ETL by synchronizing Delta tables to Lakebase

• Connect to Lakebase from Python and perform basic CRUD operations (see the sketch below)
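
To make the last lab concrete: because Lakebase is PostgreSQL-compatible, a standard Postgres driver such as psycopg2 can connect to it. The sketch below uses placeholder connection details; the actual host and credentials come from your Lakebase instance in the workspace:

    # A minimal sketch of the final lab's pattern: connect to a Lakebase
    # (PostgreSQL-compatible) instance with a standard Postgres driver and
    # run basic CRUD statements. Host, credentials, and table name are
    # placeholders, not values from the course.
    import psycopg2

    conn = psycopg2.connect(
        host="<your-lakebase-host>",
        dbname="databricks_postgres",
        user="<your-user>",
        password="<your-token>",
        sslmode="require",
    )
    conn.autocommit = True

    with conn.cursor() as cur:
        # Create
        cur.execute(
            "CREATE TABLE IF NOT EXISTS demo_items (id SERIAL PRIMARY KEY, name TEXT)"
        )
        cur.execute("INSERT INTO demo_items (name) VALUES (%s)", ("widget",))
        # Read
        cur.execute("SELECT id, name FROM demo_items")
        print(cur.fetchall())
        # Update
        cur.execute("UPDATE demo_items SET name = %s WHERE name = %s", ("gadget", "widget"))
        # Delete
        cur.execute("DELETE FROM demo_items WHERE name = %s", ("gadget",))

    conn.close()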

This is a Get Started course, so the focus is on understanding the core concepts and basic workflows for working with Lakebase. Building full production applications on top of Lakebase is outside the scope of this course.

Note: For SCORM lecture files, please ensure that you close the SCORM window after completing the content. Do not click the ‘Next Lesson’ button, as doing so may prevent the SCORM module from being marked as complete.

Paid & Subscription
3h
Lab
Onboarding

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.