On May 4th, we hosted a live webinar — Deep Learning and Apache Spark: Workflows and Best Practices. Rather than comparing deep learning systems or specific optimizations, this webinar focused on issues that are common to deep learning frameworks when running on an Apache Spark cluster, including:

  1. how to optimize cluster setup;
  2. how best to configure the Spark cluster (see the sketch after this list);
  3. how to ingest data; and
  4. how to monitor long-running jobs.

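The webinar covers these topics in depth. As a rough illustration of items 2 and 3, the PySpark sketch below shows one common pattern: giving each task a whole executor (so a deep learning worker can claim all of that executor's resources) and reading training data as a DataFrame with one partition per worker. The application name, configuration values, path, and worker count are illustrative assumptions, not recommendations taken from the webinar.

```python
from pyspark.sql import SparkSession

# Illustrative configuration only: setting spark.task.cpus equal to
# spark.executor.cores yields one task per executor, a common choice when
# each task drives a full deep learning worker (e.g., one GPU per executor).
spark = (
    SparkSession.builder
    .appName("dl-on-spark-example")          # placeholder name
    .config("spark.executor.cores", "4")
    .config("spark.task.cpus", "4")
    .config("spark.executor.memory", "16g")  # illustrative value
    .getOrCreate()
)

# Ingest training data as a DataFrame; Parquet keeps the read parallel and
# columnar. The path is a placeholder.
df = spark.read.parquet("/path/to/training_data")

# Match the number of partitions to the number of deep learning workers so
# each worker processes one partition and no worker sits idle.
num_workers = 8  # illustrative
train_df = df.repartition(num_workers)
```
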
Recording and Slides

If you missed the webinar, you can view it on-demand here; the slides are also available as attachments to the webinar.

Q&A

Toward the end, we held a Q&A. The questions are listed below, each linked to the forum post where you can find its answer.

If you’d like to perform deep learning on Databricks, start your 14-day free trial today.