SESSION
Sponsored by: Google | Designing Cloud Storage for LLMs and Data-Intensive Workloads
OVERVIEW
EXPERIENCE | In Person
---|---
TYPE | Lightning Talk
TRACK | Data Science and Machine Learning
INDUSTRY | Enterprise Technology
TECHNOLOGIES | Databricks Experience (DBX), AI/Machine Learning
SKILL LEVEL | Beginner
DURATION | 20 min
Training, serving, and fine-tuning LLMs require large-scale bandwidth from storage systems, so designing storage to keep GPUs and TPUs busy is increasingly important. This session is for AI/ML and data practitioners who want to build AI/ML data pipelines at scale and select the right combination of block, file, and object storage for their use case. Learn how to optimize AI/ML workloads such as data preparation, training, tuning, inference, and serving with the best storage solution, leveraging Databricks deployed on Google Kubernetes Engine, Vertex workflows, or Compute Engine. We'll also dive into how to optimize analytics workloads with Cloud Storage and Anywhere Cache.
SESSION SPEAKERS
Sridevi Ravuri
Sr. Director, R&D
Google