SESSION

Sponsored by: Google | Designing Cloud Storage for LLMs and Data-Intensive Workloads


OVERVIEW

EXPERIENCE: In Person
TYPE: Lightning Talk
TRACK: Data Science and Machine Learning
INDUSTRY: Enterprise Technology
TECHNOLOGIES: Databricks Experience (DBX), AI/Machine Learning
SKILL LEVEL: Beginner
DURATION: 20 min

Training, serving, and fine-tuning LLMs demand large-scale bandwidth from storage systems, so designing storage to keep GPUs and TPUs busy is increasingly important. This session is for AI/ML and data practitioners who want to build AI/ML data pipelines at scale and select the right combination of block, file, and object storage for their use case. Learn how to optimize AI/ML workloads such as data preparation, training, tuning, inference, and serving with the best storage solution, leveraging Databricks deployed on Google Kubernetes Engine, Vertex workflows, or Compute Engine. We'll also dive into how to optimize analytics workloads with Cloud Storage and Anywhere Cache.


SESSION SPEAKERS

Sridevi Ravuri

Sr. Director, R&D
Google