Dynamic Resource Allocation, Do More With Your Cluster


Spark allows you to configure your job to claim and release processing resources as the job's needs evolve. This can let you run more computation on the same cluster, since workers do not stay idle for long. In this presentation, I will go through the configuration needed to use Dynamic Resource Allocation, describe the available parameters, and explain how they affect the life cycle of the cluster. We will run a few tests on a real cluster to see dynamic allocation in action, the effects of the parameters, and cases where using dynamic allocation is not a great idea.
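As a point of reference for the kind of configuration the talk covers, here is a minimal sketch of enabling dynamic allocation on a SparkConf. The property names are Spark's standard ones; the bounds and timeout values are illustrative only and would need tuning per workload:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Illustrative values only; tune the bounds and timeouts for your workload.
val conf = new SparkConf()
  .setAppName("dynamic-allocation-demo")
  // Let Spark claim and release executors as the workload evolves.
  .set("spark.dynamicAllocation.enabled", "true")
  // The external shuffle service keeps shuffle files available
  // after an idle executor is released.
  .set("spark.shuffle.service.enabled", "true")
  // Bounds on how far the executor count can scale.
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "20")
  // Release an executor after it has been idle for this long.
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
  // Request new executors once tasks have been queued for this long.
  .set("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")

val spark = SparkSession.builder().config(conf).getOrCreate()
```

The idle timeout and backlog timeout pull in opposite directions: a short idle timeout returns resources to the cluster quickly, while a short backlog timeout claims them back aggressively when work queues up.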

About Luc Bourlier

Luc has been working on the JVM since 2002, first for IBM on the Eclipse project, in the Debugger team, where he wrote the expression evaluation engine. After a few other Eclipse projects, he went to TomTom to recreate their data distribution platform for over-the-air services. He joined Typesafe in 2011 to work on the Eclipse plugin for Scala. Luc then switched to the Fast Data team, focusing on deployment and interaction with other frameworks.