Fokko Driesprong

Principal Code Connoisseur, GoDataDriven

Fokko Driesprong, Principal Code Connoisseur at GoDataDriven, is a data processing enthusiast who loves functional programming (preferably Scala). As a data engineering consultant, he helps companies develop data-driven products. Alongside his consulting work, he contributes to a variety of open-source projects. Among others, he is a committer on the Apache {Airflow, Avro, Parquet, Druid} projects and contributes to Apache {Spark, Flink, Superset, …}.



Working with Skewed Data: The Iterative Broadcast

Summit Europe 2017

Skewed data is the enemy when joining tables using Spark. It shuffles a large proportion of the data onto a few overloaded nodes, bottlenecking Spark's parallelism and causing out-of-memory errors. The go-to answer is to use broadcast joins: leaving the large, skewed dataset in place and transmitting a smaller table to every machine in the cluster for joining. But what happens when your second table is too large to broadcast and does not fit into memory? Or, even worse, when a single key is bigger than the total memory of your executor? First, we will give an introduction to the problem. Second, we will explain the current ways of fighting the problem and why these solutions are limited. Finally, we will demonstrate a new technique, the iterative broadcast join, developed while processing ING Bank's global transaction data. This technique, implemented on top of the Spark SQL API, allows multiple large and highly skewed datasets to be joined successfully, while retaining a high level of parallelism. This is something that is not possible with existing Spark join types. Session hashtag: #EUde11
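To make the idea concrete, here is a minimal sketch of the iterative broadcast concept in plain Python, with no Spark involved. The talk's actual implementation sits on the Spark SQL API; the function and parameter names below (`iterative_broadcast_join`, `passes`) are illustrative assumptions, and the sketch assumes the smaller table has unique keys. The key point it shows: the smaller table is split into hash buckets, only one bucket is "broadcast" per pass, and the per-pass join results are unioned, so no single pass needs the whole second table in memory.

```python
def iterative_broadcast_join(large, small, passes=3):
    """Sketch of an iterative broadcast join (not Spark code).

    large  -- list of (key, value) pairs; may be heavily skewed
    small  -- list of (key, value) pairs; too big to broadcast at once
    passes -- number of rounds; each round broadcasts ~1/passes of `small`
    """
    result = []
    for i in range(passes):
        # "Broadcast" only the slice of the small table whose keys
        # hash into this round's bucket.
        broadcast_slice = {k: v for k, v in small if hash(k) % passes == i}
        # Join the (possibly skewed) large table against just this slice;
        # keys outside the bucket are simply skipped this round.
        for k, v in large:
            if k in broadcast_slice:
                result.append((k, v, broadcast_slice[k]))
    # The union of all rounds is the full inner join.
    return result

large = [("a", 1), ("a", 2), ("b", 3), ("c", 4)]  # skew: "a" repeats
small = [("a", "x"), ("b", "y")]
joined = iterative_broadcast_join(large, small, passes=2)
```

Each key lands in exactly one bucket, so every matching pair is emitted exactly once; the skewed key "a" never forces all of `small` into memory at the same time.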