Accelerating Model Development and Fine-Tuning on Databricks with TwelveLabs
Overview
| Experience | In Person |
|---|---|
| Type | Breakout |
| Track | Artificial Intelligence |
| Industry | Enterprise Technology, Media and Entertainment, Retail and CPG - Food |
| Technologies | Delta Lake, Mosaic AI, PyTorch |
| Skill Level | Intermediate |
| Duration | 40 min |
Scaling large language models (LLMs) and multimodal architectures requires efficient data management and substantial computational power. The NVIDIA NeMo Framework, built on Megatron-LM and running on Databricks, is an open-source solution that brings GPU acceleration and advanced parallelism to the Databricks Delta Lake lakehouse, streamlining workflows for pre-training and fine-tuning models at scale. This session highlights context parallelism, a NeMo capability that parallelizes across the sequence dimension, making it well suited to video datasets with very long embedding sequences. Through the case study of TwelveLabs’ Pegasus-1 model, learn how NeMo enables scalable multimodal AI development, from text to video processing, setting a new standard for LLM workflows.
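To make the idea concrete before the session: context parallelism shards a long sequence along its length so each GPU rank holds only a fraction of the tokens or frame embeddings. The sketch below is a toy, framework-free illustration of that sharding and the matching all-gather; the function names are hypothetical, and NeMo's actual implementation additionally handles attention across shard boundaries.

```python
# Conceptual sketch of context parallelism: instead of replicating a long
# sequence on every device, shard it along the sequence axis so each rank
# holds only seq_len / cp_size items. Illustrative only -- not the NeMo API.

def shard_sequence(embeddings, cp_size):
    """Split a [seq_len, hidden] sequence into cp_size contiguous chunks."""
    seq_len = len(embeddings)
    assert seq_len % cp_size == 0, "seq_len must be divisible by cp_size"
    chunk = seq_len // cp_size
    return [embeddings[i * chunk:(i + 1) * chunk] for i in range(cp_size)]

def gather_sequence(shards):
    """Reassemble shards into the full sequence (an all-gather analogue)."""
    return [token for shard in shards for token in shard]

# Example: an hour of video sampled at 1 frame/sec yields thousands of
# frame embeddings; with cp_size=4, each rank stores only a quarter.
seq = [[float(i)] for i in range(8)]  # toy [8, 1] sequence
shards = shard_sequence(seq, cp_size=4)
assert [len(s) for s in shards] == [2, 2, 2, 2]
assert gather_sequence(shards) == seq
```

The design point this illustrates is why context parallelism suits video: memory per rank scales with the shard length rather than the full sequence length, so sequence length, not model size, becomes the axis being partitioned.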
Session Speakers
Aiden Lee
Chief Technology Officer & Co-Founder
Twelve Labs, Inc.
Mansi Manohara
Solutions Architect
NVIDIA