Working with Skewed Data: The Iterative Broadcast


Skewed data is the enemy when joining tables in Spark. It shuffles a large proportion of the data onto a few overloaded nodes, bottlenecking Spark's parallelism and causing out-of-memory errors. The go-to answer is the broadcast join: leave the large, skewed dataset in place and transmit a smaller table to every machine in the cluster for joining. But what happens when your second table is too large to broadcast and does not fit into memory? Or, even worse, when a single key is bigger than the total memory of an executor?

First, we will introduce the problem. Second, we will explain the current ways of fighting it, and why those solutions are limited. Finally, we will demonstrate a new technique – the iterative broadcast join – developed while processing ING Bank's global transaction data. This technique, implemented on top of the Spark SQL API, allows multiple large and highly skewed datasets to be joined successfully while retaining a high level of parallelism, something that is not possible with existing Spark join types.
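To make the idea concrete, here is a minimal pure-Python sketch of the iterative-broadcast pattern. The talk's actual implementation sits on top of the Spark SQL API; this toy version (the function name, `chunk_count` parameter, and data shapes are all illustrative, not the speakers' code) only shows the core trick: split the too-large "small" table into hash-partitioned chunks, broadcast and join one chunk per pass, and union the partial results.

```python
def iterative_broadcast_join(large, small, chunk_count=3):
    """Join `large` (a list of (key, value) rows) against `small`
    (a dict of key -> value) one chunk at a time.

    Each pass "broadcasts" only the keys whose hash falls into that
    pass's partition, so no single broadcast holds more than roughly
    1/chunk_count of the small table. The union of the passes is the
    full inner join.
    """
    joined = []
    for pass_id in range(chunk_count):
        # Take only this pass's slice of the small table -- in Spark this
        # slice would be the broadcast variable sent to every executor.
        chunk = {k: v for k, v in small.items()
                 if hash(k) % chunk_count == pass_id}
        # Map-side join: scan the large table against the current chunk.
        joined.extend((k, lv, chunk[k]) for k, lv in large if k in chunk)
    return joined
```

Because the chunks partition the small table's keys, every matching pair appears in exactly one pass, and the large (skewed) side is never shuffled, which is what preserves parallelism.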
Session hashtag: #EUde11

About Rob Keevil

Rob is a Solution Architect and developer for Big Data applications, with 10 years' experience in the financial services and counter-fraud domains. He is currently working with ING Netherlands in their Advanced Analytics team. The team is an "internal start-up" that aims to change the way the bank operates through data-driven analytics and machine learning.

About Fokko Driesprong

Fokko, Principal Code Connoisseur at GoDataDriven, is a data-processing enthusiast who loves functional programming (preferably Scala). As a data engineering consultant, he helps companies develop data-driven products. Alongside his consulting work, he contributes to a variety of open-source projects. Among others, he is a committer on the Apache {Airflow, Avro, Parquet, Druid} projects and contributes to Apache {Spark, Flink, Superset, ...}.