Fault Tolerance in Spark: Lessons Learned from Production - Databricks

Spark is by its nature very fault tolerant. However, faults and application failures can and do happen in production at scale. In this talk, we'll discuss the nuts and bolts of fault tolerance in Spark. We will begin with a brief overview of the kinds of fault tolerance offered, then move into a deep dive of the internals, including a discussion of Spark on YARN, scheduling, and resource allocation. We will then spend some time on a case study and on some tools used to find and verify fault tolerance issues. Our case study comes from a customer who experienced an application outage that was root-caused to a scheduler bug. We discuss the analysis we did to reach this conclusion and the work we did to reproduce the bug locally, highlighting some of the techniques used to simulate faults and find bugs. At the end, we'll discuss some future directions for fault tolerance improvements in Spark, such as scheduler and checkpointing changes.
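To make the retry idea concrete: when a task fails (for example, because an executor is lost), Spark's scheduler re-runs it, recomputing any lost partition from its lineage, and only fails the job after a configurable number of attempts (`spark.task.maxFailures`, default 4). The sketch below is not Spark's actual scheduler code; it is a minimal, self-contained Python illustration of that retry loop, where the "lineage" is simply a deterministic task function that can be re-invoked.

```python
MAX_TASK_FAILURES = 4  # mirrors the default of Spark's spark.task.maxFailures


def run_with_retries(task, max_failures=MAX_TASK_FAILURES):
    """Re-run a deterministic task on failure, in the spirit of Spark's scheduler.

    Spark can recompute a lost partition because RDD transformations are
    deterministic and their lineage is recorded; here the 'lineage' is just
    the task function itself, which we call again on each attempt.
    """
    attempts = 0
    while True:
        try:
            return task()
        except Exception:
            attempts += 1
            if attempts >= max_failures:
                # Too many failed attempts: give up, as Spark fails the
                # stage (and job) once a task exceeds maxFailures.
                raise


# Example: a task that fails twice (simulating lost executors) then succeeds.
calls = {"n": 0}

def flaky_task():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated executor loss")
    return 42

result = run_with_retries(flaky_task)
print(result, calls["n"])  # the task succeeds on its third attempt
```

Because the task is deterministic, re-running it yields the same result as the original attempt would have; this determinism is precisely what makes lineage-based recomputation safe in Spark.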

Learn more:

  • Improved Fault-tolerance and Zero Data Loss in Apache Spark Streaming
  • Easy, Scalable, Fault-Tolerant Stream Processing with Structured Streaming in Apache Spark
  • About Jose Soltren

    José is a Software Engineer at Cloudera focused on Spark development. His core focus is on the internals of Spark as they matter to customers: reliability, correctness, and performance. He holds Master's and Bachelor's degrees from MIT.