SESSION

Benchmarking Data and AI Platforms: How to Choose and Use Good Benchmarks (repeated)

OVERVIEW

EXPERIENCE: In Person
TYPE: Breakout
TRACK: Data Lakehouse Architecture
INDUSTRY: Enterprise Technology
TECHNOLOGIES: Developer Experience, ETL, SQL Analytics / BI / Visualizations
SKILL LEVEL: Intermediate
DURATION: 40 min

This session is repeated.


The data analytics field is getting crowded with a multitude of new and constantly improving tools. Practitioners look to benchmarks to standardize comparisons of performance and total cost of ownership (TCO) across this evolving landscape of tools and platforms. For those cynical about advertised benchmark results, we delve into the world of benchmarks to make sense of their value to the field as a whole. By first exploring the flaws inherent in commonly known benchmarks (biases, lack of universality, and the rapidity with which the field moves), we can then focus on the criteria for selecting useful benchmarks, including practical guidelines and tools for evaluating the relevance, accuracy, and applicability of various benchmarks to your unique situation. Our session is not just theoretical; it includes real-world case studies and results, a nuanced view of how benchmarks can be both a potent tool and a misleading guide, and potential future benchmarks to fill gaps in modern data architecture.

SESSION SPEAKERS

Shannon Barrow

Lead Solutions Architect
Databricks