Mosaic AI Model Training: Fine-Tune Your LLM on Databricks for Specialized Tasks and Knowledge

What you’ll learn

Foundation LLMs excel at general knowledge and generic tasks. However, many use cases require domain-specific knowledge or behavior.

With Databricks, you can easily fine-tune and deploy specialized versions of open source (OSS) LLMs that outperform their baseline models:

  • Deploy and own smaller LLMs that outperform larger ones, while reducing cost and strengthening privacy and security
  • Train the LLM on your own, custom business knowledge
  • Adapt the LLM's behavior to perform better on specific tasks such as named-entity recognition (NER)

In this tutorial, you will learn how to:

  • Prepare a clean training and evaluation dataset (sketched below)
  • Leverage Databricks Mosaic AI Model Training to customize an existing OSS LLM such as Mistral, Llama or DBRX (sketched below)
  • Deploy this model on a Model Serving endpoint, providing live inferences
  • Evaluate and benchmark the fine-tuned model against its baseline, leveraging MLflow Evaluate (sketched below)
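
The sketch below shows what the data preparation step can look like in a Databricks notebook (where spark is predefined): raw question/answer pairs are reshaped into the chat "messages" format accepted by the fine-tuning API, then persisted as Delta tables in Unity Catalog. The source table and its column names are hypothetical; the demo notebooks build their own dataset.

# Minimal data preparation sketch, assuming a hypothetical table
# main.dbdemos_llm_fine_tuning.raw_documentation with `question` and `answer` columns.
from pyspark.sql import functions as F

system_prompt = "You are an assistant that answers questions about the Databricks documentation."

training_df = (
    spark.table("main.dbdemos_llm_fine_tuning.raw_documentation")  # hypothetical source table
    .withColumn(
        "messages",
        F.array(
            F.struct(F.lit("system").alias("role"), F.lit(system_prompt).alias("content")),
            F.struct(F.lit("user").alias("role"), F.col("question").alias("content")),
            F.struct(F.lit("assistant").alias("role"), F.col("answer").alias("content")),
        ),
    )
    .select("messages")
)

# Hold out a small evaluation split and persist both splits as Delta tables in Unity Catalog.
train_df, eval_df = training_df.randomSplit([0.9, 0.1], seed=42)
train_df.write.mode("overwrite").saveAsTable("main.dbdemos_llm_fine_tuning.chat_completion_training_dataset")
eval_df.write.mode("overwrite").saveAsTable("main.dbdemos_llm_fine_tuning.chat_completion_evaluation_dataset")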
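
The fine-tuning step itself is a single API call. Below is a minimal sketch using the Foundation Model Fine-tuning Python API, assuming the databricks_genai package is installed in the notebook; the base model, training duration and table names are illustrative choices, not requirements, and parameter names may evolve while the feature is in preview (the demo notebooks contain the authoritative calls).

# Minimal fine-tuning run sketch (preview API; values below are illustrative assumptions).
from databricks.model_training import foundation_model as fm

run = fm.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # base OSS model to customize (must be a supported model)
    train_data_path="main.dbdemos_llm_fine_tuning.chat_completion_training_dataset",
    eval_data_path="main.dbdemos_llm_fine_tuning.chat_completion_evaluation_dataset",
    task_type="CHAT_COMPLETION",                  # instruction/chat fine-tuning
    register_to="main.dbdemos_llm_fine_tuning",   # register the resulting model in Unity Catalog
    training_duration="5ep",                      # e.g. 5 epochs
)

print(run.name, run.status)
# Follow training progress from the returned run object or in the MLflow Experiments UI.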
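
The last two steps, sketched below, assume the fine-tuned model registered in Unity Catalog has already been deployed to a Model Serving endpoint (done from the Serving UI or the serving APIs, as shown in the demo notebooks). The endpoint name dbdemos_llm_fine_tuned and the column handling are hypothetical; the evaluation reuses the held-out dataset from the preparation step and scores the answers with mlflow.evaluate.

# Minimal serving + evaluation sketch, assuming an existing endpoint named dbdemos_llm_fine_tuned.
import mlflow
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

def answer(endpoint_name: str, question: str) -> str:
    # Send a chat-completion request to a Databricks Model Serving endpoint
    # (assumes an OpenAI-style chat response payload).
    response = client.predict(
        endpoint=endpoint_name,
        inputs={"messages": [{"role": "user", "content": question}], "max_tokens": 300},
    )
    return response["choices"][0]["message"]["content"]

# Reuse the held-out split: messages[1] is the user question, messages[2] the expected answer.
eval_pdf = spark.table("main.dbdemos_llm_fine_tuning.chat_completion_evaluation_dataset").toPandas()
eval_pdf["question"] = eval_pdf["messages"].apply(lambda m: m[1]["content"])
eval_pdf["ground_truth"] = eval_pdf["messages"].apply(lambda m: m[2]["content"])
eval_pdf["prediction"] = eval_pdf["question"].apply(lambda q: answer("dbdemos_llm_fine_tuned", q))

with mlflow.start_run(run_name="fine_tuned_vs_baseline"):
    results = mlflow.evaluate(
        data=eval_pdf,
        predictions="prediction",         # column holding the fine-tuned model's answers
        targets="ground_truth",           # column holding the expected answers
        model_type="question-answering",  # adds built-in metrics such as exact_match and toxicity
    )
    print(results.metrics)

Running the same loop against the baseline model's endpoint produces the comparison metrics used to quantify the gain from fine-tuning.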

To run the demo, get a free Databricks workspace and execute the following commands in a Python notebook:

%pip install dbdemos
import dbdemos
dbdemos.install('llm-fine-tuning', catalog='main', schema='dbdemos_llm_fine_tuning')

Recommended

  • Lakehouse Monitoring and Vector Search (On-Demand Video)
  • Feature Store and Online Inference (Tutorial)
  • AI Functions: Query LLMs With DB SQL (Tutorial)
  • Build High-Quality RAG Apps with Mosaic AI Agent Framework and Agent Evaluation, Model Serving, and Vector Search (Tutorial)

Disclaimer: This tutorial leverages features that are currently in private preview. Databricks Private Preview terms apply.
For more details, open the introduction notebook.