
Call for Presentations
Spark + AI Summit Europe 2019

Spark + AI Summit Europe will take place October 15-17, 2019. The call for presentations is now closed.

Today, data and AI work together: the best AI applications require massive amounts of data and processing power to build sophisticated machine learning models. We are looking for deep, technical content in these areas.

Based on past attendance, we expect an audience of data scientists and engineers, developers, researchers, and machine learning experts. We are seeking speakers to share developer-focused, technical talks at this summit in Amsterdam, on a broad set of themes and topics to choose from:

  • Data Engineering
  • Data Science
  • AI Use Cases and New Opportunities
  • Productionizing Machine Learning
  • Deep Learning Techniques
  • Developer
  • Technical Deep Dives
  • Python and Advanced Analytics
  • Tutorials
  • Enterprise
  • Research

Do you have big ideas, compelling stories, or case studies to share on these themes and topics? Tips and tricks, how-tos and whys, or best practices for community members embarking on the Spark + AI journey?

Have you built complex data pipelines for ETL using Apache Spark along with popular streaming engines? Or worked on complex data pipelines for productionizing machine learning models at scale? Have you built deep learning models with popular frameworks that have made a difference?

If so, write up your proposal for a 40-minute talk, an 80-minute technical deep dive, or a 90-minute hands-on tutorial. We’d love to put your ideas, case studies, best practices, and technical knowledge in front of the largest gathering of Spark, AI, and big data professionals.

Suggested Themes and Topics

These are just guidelines and suggestions—we are open to your creativity.

Developer

In this developer-focused theme, presenters cover technical content across a wide range of topics: Spark engine internals, Spark performance and optimizations, extending or using Spark APIs, Spark SQL, machine learning, and streaming. A short illustrative sketch follows the category list below.

You will be able to categorize your talk into different sections including:

  • Structured Streaming
  • Core Spark Internals
  • ETL
  • Extending or Using Spark APIs
  • Spark SQL
  • DataSources or Data Connectors

Please make sure to categorize your talk if you would like it included in a specific subtopic or category.

AI Use Cases and New Opportunities

If you have an AI use case or case study, or have solved a specific problem in automating a process, device, or automaton; recognizing and analysing speech, video, images, or text; improving conversational interfaces like chatbots and intelligent personal assistants; or playing intelligent games, then this thematic category is for your use case, whether you used neural networks, natural language processing, or a rule-based engine.

Share your journey of automation with the community and tell us what’s possible in this pervasive field that is helping modern businesses innovate.

You will be able to categorize your talk into different use case scenarios including:

  • Automation, self-driving automatons or vehicles
  • Speech, image, or video recognition
  • Intelligent personal assistant devices or chatbots
  • Learning and playing intelligent games
  • Using AI techniques in the health and life sciences, financial services, or retail
  • Recommendation engines
  • Other

Please make sure to categorize your talk if you would like it included in a specific subtopic or category.

Deep Learning Techniques

As a class of machine learning algorithms, deep learning has fueled the development of AI and predictive analytics applications that learn from data or transferred knowledge.

If you have implemented a real-world application in speech recognition, image and video processing, natural language processing, recommendation engines, ad tech, or mobile advertising using any of the frameworks outlined below and the techniques they offer, this category is for you; see the brief sketch after the framework list.

Share your technical details and implementation with the community, and tell us about your gains, pain points, and the merits of your solutions.

You will be able to categorize your talk by the DL framework used:

  • TensorFlow
  • PyTorch
  • Keras
  • CNTK
  • DeepLearning4J
  • MXNet
  • BigDL
  • Deep Learning Pipelines
  • TensorFlowOnSpark
  • Other
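
As a hedged illustration of the level of implementation detail such talks can dig into, here is a minimal model definition in Keras (the framework is chosen arbitrarily from the list above; layer sizes and the input shape are placeholders):

    # A minimal Keras sketch: define, compile, and inspect a small classifier.
    # Layer sizes and the 784-feature input are illustrative placeholders.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(784,)),  # hidden layer
        layers.Dense(10, activation="softmax"),                    # 10-class output
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit(x_train, y_train, epochs=5)  # train on your own data
    model.summary()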

Productionizing Machine Learning

How do you build and deploy machine learning models to a production environment? How do you manage an entire machine learning life cycle? How do you update your model with new features? And what are the best practices and agile data architectures that data scientists and data engineers employ to productionize machine learning models?

Whether your model is a deep learning model or a Spark MLlib machine learning model, how do you experiment, track, and score your trained model with real-time data?

If you have answers to these questions, and have addressed them in your design, implementation, and deployment schemes in production, then we want to hear from you.

Share your technical details and your model implementation and deployment with the community, and tell us about your gains, pain points, and the merits of your solutions.
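
As one small illustration, experiment tracking is a common piece of this lifecycle; here is a minimal, hedged sketch using MLflow (one tool among many, with placeholder parameters, metrics, and model):

    # A minimal MLflow tracking sketch: log a parameter, a metric, and a model.
    # MLflow is one option among many; names and values are placeholders.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)

    with mlflow.start_run():
        model = LogisticRegression(max_iter=200).fit(X, y)
        mlflow.log_param("max_iter", 200)                        # hyperparameter
        mlflow.log_metric("train_accuracy", model.score(X, y))   # tracked metric
        mlflow.sklearn.log_model(model, "model")                 # versioned artifact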

Technical Deep Dives

As the name suggests, this topic has an 80-minute slot that allows a presenter to go deeper into a topic than the regular 40-minute sessions allow. The session should be highly technical, with some demonstration and code examples. Talks from previous summits include Scalable Monitoring of GPU Usage with TensorFlow Models Using Prometheus; A Deep Dive into the Query Execution Engine of Spark SQL; Easy, Scalable, Fault-Tolerant Stream Processing with Structured Streaming in Apache Spark; and Neo4j Morpheus: Interweaving Table and Graph Data with SQL and Cypher in Apache Spark.

This thematic category is not restricted to Spark, though; it can cover deep learning practices and techniques too.

Research

Dedicated to academic and advanced industrial research, this theme seeks talks spanning systems research involving and extending Spark + AI in use cases (e.g., genomics, GPUs, I/O storage devices, MPP, self-operating automatons, or image scanning and disease detection in cancer).

Data Science

While Data Science is a broad theme that overlaps with deep learning, machine learning, and AI, this thematic category spotlights the practice of data science using Spark, including SparkR. Sessions can cover innovative techniques, algorithms, and systems that refine raw data into actionable insight using visualization, statistics, feature engineering, and machine learning algorithms, both supervised and unsupervised.

Enterprise

This theme features use cases on how businesses deploy Apache Spark and the lessons learned. Talks offer an exploration into business use cases across industries, ROI, best practices, relevant business metrics, compliance requirements for specific industries, and customer testimonials.

Data Engineering

For this thematic category, we seek speakers’ experiences in building complex data infrastructure for advanced data analytics using Apache Spark. In particular, we want to hear how you grappled with data quality issues and complexities in building end-to-end data pipelines: from ingestion to ETL to cleaning data for downstream consumption by machine learning models or other applications.

If you can speak to how you architected, implemented, monitored, and deployed these data pipelines, or how you combined streaming data with historical data from myriad sources, then we want your stories.

This is a new track, dedicated to the broad and demanding discipline of data engineering for advanced analytics when grappling with massive amounts of data.

Some examples of talks in this new theme: Scaling Apache Spark at Facebook, Migrating to Apache Spark at Netflix, or Scaling Apache Spark on Kubernetes at Lyft.
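
To ground the pipeline stages this track covers, here is a minimal, hedged PySpark batch ETL sketch (file paths and column names are illustrative placeholders):

    # A minimal PySpark ETL sketch: ingest raw data, clean it, and write it
    # out for downstream consumers. Paths and columns are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, to_date

    spark = SparkSession.builder.appName("ExampleETL").getOrCreate()

    # Ingest: read raw JSON events.
    raw = spark.read.json("/data/raw/events.json")

    # Clean: drop malformed rows, derive a typed date column, filter test traffic.
    clean = (raw.dropna(subset=["user_id", "event_time"])
                .withColumn("event_date", to_date(col("event_time")))
                .filter(col("user_id") != "test"))

    # Load: write partitioned Parquet for downstream ML or reporting jobs.
    (clean.write.mode("overwrite")
          .partitionBy("event_date")
          .parquet("/data/clean/events"))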

Python and Advanced Analytics

Spark users can easily install PySpark through PyPI and use it to write scalable advanced analytics applications. This theme is dedicated to talks on the specific use of Python with scalable data, not only for data science and machine learning applications but also for Spark applications generally. If you have a use case implemented in PySpark and wish to share it with the Python user community, this thematic category is for you.
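
For example, a minimal sketch of getting started with PySpark from PyPI (the local master and sample data are placeholders; a real job would run against a cluster):

    # Install first:  pip install pyspark
    from pyspark.sql import SparkSession

    # Run locally for experimentation; point master at your cluster in production.
    spark = (SparkSession.builder
             .master("local[*]")
             .appName("Quickstart")
             .getOrCreate())

    # A tiny DataFrame to demonstrate the API.
    df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "a")], ["id", "label"])
    df.groupBy("label").count().show()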

Tutorials

These 90-minute sessions are designed to introduce concepts followed by hands-on exercises in notebooks, in the instructor’s choice of execution environment. They are not confined to Spark: they can also cover machine learning techniques, building deep learning models using a particular framework, or showing how to use Spark’s Structured APIs for a use case.

Some examples of tutorials are Deep Learning and Modern NLP, Building Robust Production Data Pipelines with Databricks Delta, or Writing Continuous Applications with Structured Streaming PySpark API.

Required information

You’ll need to include the following information for your proposal:

  • Proposed title
  • Presentation overview and extended description
  • Suggested themes and topics from above thematic categories
  • Speaker(s): Biography and headshot
  • A video or a YouTube link to a video of the speaker. If you don’t have a recording of a previous talk, please record yourself explaining your proposed talk; this is required to complete your submission.
  • Level of difficulty of your talk (beginner, intermediate, advanced)

Tips for submitting a successful proposal

Help us understand why your presentation is the right one for Spark + AI Summit. Please keep in mind that this event is by and for professionals. All presentations and supporting materials must be respectful and inclusive.

  • Be authentic. Your peers need original ideas in real-world scenarios, relevant examples, and knowledge transfer.
  • Include the summit theme of Build.Unify.Scale as part of your talk.
  • Give your proposal a simple and straightforward title.
  • Include as much detail about the presentation as possible.
  • Keep proposals free of product, marketing or sales pitch.
  • If you are not the speaker, provide the contact information of the person you’re suggesting. We tend to ignore proposals submitted by PR agencies and require that we can reach the suggested participant directly. Improve the proposal’s chances of being accepted by working closely with the presenter(s) to write a jargon-free proposal that contains a clear value for attendees.
  • Keep the audience in mind: they are professional and already pretty smart.
  • Limit the scope: in 40 minutes, you won’t be able to cover ‘everything about framework X’. Instead, pick a useful aspect, a particular technique, or walk through a simple program.
  • Your talk must be technical and show code snippets or some demonstration of working code.
  • Explain why people will want to attend and what they’ll take away from it.
  • Don’t assume that your company’s name buys you credibility. If you’re talking about something important that you have specific knowledge of because of what your company does, spell that out in the description.
  • Does your presentation have the participation of a woman, person of color, or member of another group often underrepresented at a tech conference? Diversity is one of the factors we seriously consider when reviewing proposals as we seek to broaden our speaker roster.
  • All selected talks will go through a review process.