LLMs in Production: Fine-Tuning, Scaling, and Evaluation
OVERVIEW
| EXPERIENCE | In Person |
|---|---|
| TYPE | Breakout |
| TRACK | Generative AI |
| INDUSTRY | Enterprise Technology |
| TECHNOLOGIES | AI/Machine Learning, ETL, GenAI/LLMs |
| SKILL LEVEL | Intermediate |
| DURATION | 40 min |
We will dive into the practicalities of deploying LLMs in business settings. We'll explore when to leverage LLMs and how to reduce the complexity of the problem. Our discussion will guide you through designing an evaluation methodology and detail the circumstances that necessitate fine-tuning for optimal performance. We will elaborate on the nuances of training data selection, establishing a flexible training ecosystem, hyperparameter optimization, and scalable training and fine-tuning workflows. In the practical portion of the session, we will walk through the ETL process, how to format and structure data for fine-tuning, and how to organize, save, and manage these datasets. We will demonstrate a few fine-tuning configurations, show you how to monitor and evaluate your fine-tuned LLMs, and show how to collect further datasets to improve your fine-tuned LLM over time.
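As a taste of the data-formatting step described above, here is a minimal sketch of converting raw question/answer records into the chat-style JSONL format commonly used for instruction fine-tuning. The field names (`messages`, `role`, `content`) follow the widely used chat schema, but the exact format your training framework expects may differ, and the function and sample data here are illustrative assumptions, not material from the session.

```python
import json

def to_chat_jsonl(records, system_prompt="You are a helpful assistant."):
    """Format (question, answer) pairs as chat-format JSONL lines.

    Each output line is a JSON object with a "messages" list, the
    schema most instruction fine-tuning pipelines accept.
    """
    lines = []
    for question, answer in records:
        example = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        # One JSON object per line, i.e. JSONL
        lines.append(json.dumps(example))
    return "\n".join(lines)

# Hypothetical record for illustration
raw = [("What does ETL stand for?", "Extract, transform, and load.")]
print(to_chat_jsonl(raw))
```

Keeping the formatting logic in one small, pure function like this makes it easy to version the dataset alongside the code that produced it, which helps when you later collect new data and re-run fine-tuning.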
SESSION SPEAKERS
Brian Law
Senior Specialist Solutions Architect
Databricks
Nathan Azrak
Senior Machine Learning Engineer
Atlassian