Deep Learning on Apache Spark at CERN’s Large Hadron Collider with Intel Technologies


In this session, you will learn how CERN applied end-to-end deep learning and analytics pipelines at scale on Apache Spark for High Energy Physics, using the open source BigDL and Analytics Zoo software running on Intel Xeon-based distributed clusters.

Technical details and development learnings will be shared using an example of topology classification to improve real-time event selection at the Large Hadron Collider experiments. The classifier demonstrates high signal efficiency while reducing the false positive rate compared with existing methods. It could be used as a filter to improve the online event selection infrastructure of the LHC experiments, enabling a more flexible and inclusive selection strategy while reducing the downstream resources wasted on processing false positives.
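The trade-off described above can be quantified with two standard metrics: signal efficiency (the true positive rate) and the false positive rate at a chosen classifier threshold. A minimal sketch in plain Python (the function name, threshold, and toy data are illustrative, not taken from the talk):

```python
def selection_metrics(labels, scores, threshold=0.5):
    """Compute signal efficiency (TPR) and false positive rate (FPR)
    for a binary classifier used as an event-selection filter.

    labels: 1 for signal events, 0 for background events.
    scores: classifier outputs in [0, 1]; events with score >= threshold pass.
    """
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    efficiency = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return efficiency, fpr

# Toy example: 4 signal and 4 background events.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
eff, fpr = selection_metrics(labels, scores)
# eff = 0.75 (3 of 4 signal events pass), fpr = 0.25 (1 of 4 background events pass)
```

Lowering the threshold makes the selection more inclusive (higher efficiency) at the cost of more false positives; the talk's point is that a learned classifier achieves a better curve of this trade-off than rule-based selections.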

This is part of CERN's research on applying Deep Learning and analytics using open source and industry-standard technologies as an alternative to the existing customized rule-based methods. We show how we could quickly build and deploy distributed deep learning solutions and data pipelines at scale on Apache Spark using Analytics Zoo and BigDL, open source frameworks that unify analytics and AI on Spark with easy-to-use APIs and development interfaces seamlessly integrated with Big Data platforms.
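To give a flavor of the APIs mentioned above, a pipeline of this kind might look roughly as follows with Analytics Zoo's Keras-style API. This is a hedged sketch only: the layer sizes, feature dimension, application name, and training data are placeholders, not the actual CERN topology classifier, and running it requires a Spark environment with Analytics Zoo and BigDL installed.

```python
from zoo.common.nncontext import init_nncontext
from zoo.pipeline.api.keras.models import Sequential
from zoo.pipeline.api.keras.layers import Dense

# Obtain a SparkContext configured for BigDL/Analytics Zoo.
sc = init_nncontext("topology-classifier-sketch")

# Placeholder fully connected binary classifier; the real model
# architecture and input features are described in the talk.
model = Sequential()
model.add(Dense(64, activation="relu", input_shape=(14,)))  # 14 is an assumed feature count
model.add(Dense(1, activation="sigmoid"))

model.compile(optimizer="adam", loss="binary_crossentropy")

# train_rdd would be an RDD of Sample objects built from event features;
# calling fit then runs distributed training across the Spark cluster.
# model.fit(train_rdd, batch_size=256, nb_epoch=10)
```

The appeal for this use case is that the same Spark cluster handles both the data preparation (reading and transforming physics event data) and the distributed model training, without a separate deep learning infrastructure.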

See More Spark + AI Summit in San Francisco 2019 Videos


About Riccardo Castellotti

Riccardo Castellotti is a Data Engineer at CERN working on the Hadoop, Spark, streaming, and database services. He supports user communities at CERN and in the LHC experiments, providing Big Data and ML solutions. Within CERN openlab, he works on developing integrated ML and analytics solutions in cloud environments.