An Adaptive Execution Engine For Apache Spark SQL

Catalyst is an excellent optimizer in Spark SQL, providing an open interface for rule-based optimization at the planning stage. However, static (rule-based) optimization does not take the runtime data distribution into account. A feature called Adaptive Execution was introduced in Spark 2.0 to cover this gap, but it is still at an early stage. We enhanced the existing Adaptive Execution feature, focusing on adjusting the execution plan at runtime according to the intermediate output of each stage: setting the number of partitions for joins and aggregations, avoiding unnecessary data shuffling and disk I/O, handling data skew, and even optimizing the join order as a cost-based optimizer (CBO) would. In our benchmark experiments, this feature saved substantial manual effort otherwise spent tuning parameters such as the shuffle partition number, which is error-prone and misleading. In this talk, we will present the new adaptive execution framework, task scheduling, failover retry mechanism, runtime plan switching, and more. Finally, we will share our experience benchmarking TPCx-BB at the 100-300 TB scale on a Spark cluster of hundreds of bare-metal nodes.
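As a rough illustration of the tuning this feature is meant to remove, the sketch below enables the adaptive settings that ship with upstream Apache Spark (Spark 3.x configuration names such as spark.sql.adaptive.enabled and spark.sql.adaptive.skewJoin.enabled) instead of hand-picking a fixed spark.sql.shuffle.partitions value. The enhanced engine described in the talk is a separate build, so its exact options may differ.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: let the engine choose shuffle partition counts and handle
// skew at runtime, rather than hard-coding spark.sql.shuffle.partitions.
// Config names are the upstream Spark 3.x ones; the enhanced Adaptive
// Execution build discussed in the talk may expose different options.
object AdaptiveExecutionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("adaptive-execution-sketch")
      // Re-optimize the query plan between stages using runtime statistics.
      .config("spark.sql.adaptive.enabled", "true")
      // Coalesce small post-shuffle partitions based on actual output sizes.
      .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
      // Split heavily skewed partitions when joining.
      .config("spark.sql.adaptive.skewJoin.enabled", "true")
      .getOrCreate()

    import spark.implicits._

    // A join whose ideal partition count depends on the data actually shuffled.
    val orders = Seq((1, "apple"), (2, "pear"), (1, "plum")).toDF("userId", "item")
    val users  = Seq((1, "alice"), (2, "bob")).toDF("userId", "name")
    orders.join(users, "userId").groupBy("name").count().show()

    spark.stop()
  }
}
```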

Session hashtag: EUdev4

Learn more:

  • Spark SQL Adaptive Execution Unleashes The Power of Cluster in Large Scale
  • About Carson Wang

    Carson Wang is a big data software engineer at Intel, focusing on developing and improving new big data technologies. He is an active open source contributor to the Apache Spark and Alluxio projects. He is also a core developer and maintainer of HiBench, an open-source big data micro-benchmark suite. Prior to Intel, Carson worked for Microsoft on Windows Azure.