Not Your Father’s Database: How to Use Apache Spark Properly in Your Big Data Architecture


This session will cover a series of problems that are well solved with Apache Spark, as well as those that require additional technologies to implement correctly. Here's an example outline of some of the topics that will be covered in the talk.

Problems that Apache Spark solves well:

1) Analyzing a large set of data files.
2) Doing ETL on a large amount of data.
3) Applying machine learning and data science to a large dataset.
4) Connecting BI/visualization tools to Apache Spark to analyze large datasets internally.

Examples of problems that Apache Spark is not optimized for:

1) Random access, frequent inserts, and updates of rows in SQL tables. Databases have better performance for these use cases.
2) Supporting incremental updates of databases into Spark. It's not performant to update your Spark SQL tables backed by files. Instead, you can use message queues and Spark Streaming, or do an incremental select, to make sure your Spark SQL tables stay up to date with your production databases (see the first sketch after this list).
3) External reporting with many concurrent requests. While Spark's ability to cache your data in memory allows fast interactive querying, Spark is not designed to serve many concurrent requests. If you have many concurrent users to support, it's better to use Spark to ETL your data into summary tables, or some other format, in a traditional database and serve your reports from there (see the second sketch after this list).
4) Searching content. A Spark job can certainly be written to filter or search for any content you'd like, but ElasticSearch is a specialized engine designed to return search results more quickly than Spark.
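As a rough illustration of the incremental-select approach mentioned in item 2 above, the minimal PySpark sketch below pulls only the rows that changed in a production database since the last load and appends them to a file-backed Spark SQL table. The connection URL, the orders table, the updated_at column, the credentials, and the high-water mark are all assumed placeholders for the example, not details from the talk.

```python
# Minimal sketch of an incremental select into a file-backed Spark SQL table.
# All connection details, table names, and columns below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-sync").getOrCreate()

prod_db_url = "jdbc:postgresql://prod-db:5432/shop"  # assumed connection string
last_load_ts = "2015-06-01 00:00:00"                 # high-water mark from the previous run

# Pull only the rows that changed since the last load, instead of
# re-reading the whole production table.
incremental_query = (
    "(SELECT * FROM orders WHERE updated_at > '{}') AS recent_orders"
    .format(last_load_ts)
)

changed_rows = (
    spark.read.format("jdbc")
    .option("url", prod_db_url)
    .option("dbtable", incremental_query)
    .option("user", "etl_user")
    .option("password", "etl_password")
    .load()
)

# Append the new and changed rows to the file-backed table that Spark SQL queries.
changed_rows.write.mode("append").parquet("/data/warehouse/orders")
```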
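And as a rough illustration of the ETL-to-summary-tables pattern from item 3, the sketch below aggregates a large set of event files with Spark and writes the much smaller summary into a traditional database over JDBC, so that reporting dashboards with many concurrent users query the database rather than Spark. The input path, summary table name, grouping columns, and reporting database URL are assumptions for the example.

```python
# Minimal sketch: use Spark for the heavy aggregation, then serve reports
# from a traditional database. Paths and connection details are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-summary").getOrCreate()

# Assumed location of the raw event files.
events = spark.read.parquet("/data/warehouse/events")

# Boil the large dataset down to a small, report-friendly summary.
daily_summary = (
    events
    .groupBy("event_date", "country")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("user_id").alias("unique_users"),
    )
)

# Write the summary into the reporting database; BI tools hit this table
# instead of sending concurrent queries to Spark.
(
    daily_summary.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://reporting-db:5432/reports")
    .option("dbtable", "daily_event_summary")
    .option("user", "report_writer")
    .option("password", "report_password")
    .mode("overwrite")
    .save()
)
```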

Additional Reading:

  • Getting Started with Apache Spark on Databricks

  • About Vida Ha

    Vida is currently a Solutions Engineer at Databricks where her job is to onboard and support customers using Spark on Databricks Cloud. In her past, she worked on scaling Square's Reporting Analytics System. She first began working with distributed computing at Google, where she improved search rankings of mobile-specific web content and built and tuned language models for speech recognition using a year's worth of Google search queries. She's passionate about accelerating the adoption of Apache Spark to bring the combination of speed and scale of data processing to the mainstream.