
Data Warehousing with Spark Streaming at Zalando


Zalando's AI-driven products and its distributed landscape of analytical data marts cannot wait for long-running, hard-to-recover, monolithic batch jobs that take all night to calculate already-outdated data. Modern data integration pipelines need to deliver fast, easy-to-consume, high-quality data sets. Based on Spark Streaming and Delta, the central data warehousing team was able to deliver widely used master data as S3 or Kafka streams and snapshots at the same time.
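
To make the "streams and snapshots at the same time" idea concrete, here is a minimal sketch of a Spark Structured Streaming job that consumes master data from Kafka and continuously appends it to a Delta table on S3. The topic, bucket paths, and schema are hypothetical placeholders, not details from the talk:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

val spark = SparkSession.builder()
  .appName("master-data-ingestion") // hypothetical job name
  .getOrCreate()

// Assumed schema of the Kafka message payload (illustration only)
val articleSchema = new StructType()
  .add("article_id", StringType)
  .add("name", StringType)
  .add("updated_at", TimestampType)

// Read raw events from Kafka and parse the JSON payload
val articles = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092") // placeholder brokers
  .option("subscribe", "article-master-data")      // hypothetical topic
  .load()
  .select(from_json(col("value").cast("string"), articleSchema).as("data"))
  .select("data.*")

// Continuously append the parsed records to a Delta table on S3
articles.writeStream
  .format("delta")
  .option("checkpointLocation", "s3://warehouse/checkpoints/articles") // placeholder path
  .outputMode("append")
  .start("s3://warehouse/delta/articles")                              // placeholder path
```

The same Delta path can then serve both consumption styles: batch readers load it as a snapshot with `spark.read.format("delta")`, while downstream jobs subscribe to it as a change stream with `spark.readStream.format("delta")`.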

The talk will cover challenges in our fashion data platform and a detailed architectural deep dive into separating integration from enrichment, providing streams as well as snapshots, and feeding the data to distributed data marts. Finally, lessons learned and best practices around Delta's MERGE command, the Scala API vs. Spark SQL, and schema evolution offer further insights and guidance for similar use cases.
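
As a hedged illustration of the MERGE and Scala-API-vs.-SQL point, the sketch below expresses the same upsert both ways against an assumed Delta table; the path, the join key, and the `updatesDf` DataFrame of incoming changes are hypothetical, and automatic schema evolution for MERGE only takes effect when the corresponding Delta config is enabled:

```scala
import io.delta.tables.DeltaTable

// Optional: allow MERGE to pick up new columns arriving in the source (schema evolution)
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

val target = DeltaTable.forPath(spark, "s3://warehouse/delta/articles") // placeholder path

// Variant 1: Delta Lake Scala API
target.as("t")
  .merge(updatesDf.as("s"), "t.article_id = s.article_id") // updatesDf: assumed DataFrame of changes
  .whenMatched().updateAll()
  .whenNotMatched().insertAll()
  .execute()

// Variant 2: the equivalent Spark SQL statement
updatesDf.createOrReplaceTempView("updates")
spark.sql("""
  MERGE INTO delta.`s3://warehouse/delta/articles` AS t
  USING updates AS s
  ON t.article_id = s.article_id
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *
""")
```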

 

About Sebastian Herold

Zalando SE

As a big data architect and principal engineer, Sebastian is passionate about solving large-scale data problems and inspiring teams with a data-driven mindset. After implementing a company-wide self-service data platform and co-founding the Data Landscape Manifesto at Scout24, Sebastian now supports Zalando with fashion data.