How Adobe Does 2 Million Records Per Second Using Apache Spark!

Adobe’s Unified Profile System is the heart of its Experience Platform. It ingests terabytes of data a day and holds petabytes of it. Alongside this massive growth we have faced multiple challenges in our Apache Spark deployment, which we use from ingestion through processing. We want to share some of the learnings and hard-earned lessons we picked up as we reached this scale.

  • Repeated Queries Optimization – or the Art of How I Learned to Cache My Physical Plans. SQL interfaces expose prepared statements; how do we apply the same idea to batch processing?
  • Know thy Join – Joins and group-bys are unavoidable when you don’t have much control over the data model, but you must know exactly what happens underneath, given the deadly shuffle you might encounter.
  • Structured Streaming – Know thy Lag – When consuming from a Kafka topic that sees sporadic loads, it is very important to monitor consumer lag. It also makes you respect what a beast backpressure is.
  • Skew! Phew! – Skewed data causes many uncertainties, especially at runtime: configs that worked on day zero no longer apply on day 100. The code must be made resilient to skewed datasets.
  • Sample Sample Sample – Sometimes the best way to approach a large problem is to eat a small part of it first.
  • Redis – Sometimes the best tool for the job is actually outside your JVM. Pipelining + Redis is a powerful combination to supercharge your data pipeline.
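The consumer-lag point above boils down to simple arithmetic: lag per partition is the log-end offset minus the last committed offset. Here is a minimal Python sketch of that monitoring check; the offset dictionaries and the `consumer_lag` helper are illustrative stand-ins, not a real Kafka client API.

```python
def consumer_lag(end_offsets, committed):
    """Lag per partition = log-end offset minus last committed offset.

    Partitions the consumer has never committed to count as fully behind.
    """
    return {p: end_offsets[p] - committed.get(p, 0) for p in end_offsets}

# Hypothetical snapshot of a 3-partition topic under sporadic load.
end_offsets = {0: 1200, 1: 980, 2: 1500}  # latest offset written per partition
committed = {0: 1150, 1: 980, 2: 700}     # last offset the consumer committed

lag = consumer_lag(end_offsets, committed)
# Partition 2 has fallen far behind: a backlog a sporadic burst built up,
# and exactly the kind of signal that should drive backpressure settings.
```

Tracking this per partition, rather than as a single topic-wide number, is what exposes the sporadic bursts the bullet describes.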
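One common remedy for the skew described above is key salting: spread a hot join key across several buckets and replicate the small side once per salt. The sketch below simulates that in plain Python; `NUM_SALTS`, the `salted` helper, and the toy datasets are all hypothetical, not Adobe's implementation.

```python
import random
from collections import defaultdict

NUM_SALTS = 4  # illustrative fan-out; in practice tuned to the observed skew

def salted(key):
    """Append a random salt so one hot key spreads over NUM_SALTS buckets."""
    return (key, random.randrange(NUM_SALTS))

# Big, skewed side: "hot" dominates and would land on a single reducer.
big = [("hot", i) for i in range(1000)] + [("cold", 0)]

# Small side is replicated once per salt so every salted bucket still matches.
small = {"hot": "h-meta", "cold": "c-meta"}
small_exploded = {(k, s): small[k] for k in small for s in range(NUM_SALTS)}

# Shuffle simulation: group the big side by its salted key.
buckets = defaultdict(list)
for k, v in big:
    buckets[salted(k)].append(v)

# The join now runs per salted bucket instead of one giant hot partition.
joined = [(k, v, small_exploded[(k, s)])
          for (k, s), vals in buckets.items() for v in vals]
```

The trade-off is deliberate: the small side grows by a factor of `NUM_SALTS`, in exchange for the hot key's work being split across that many tasks.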

We will present our war stories and lessons for each of the above, and we hope they will benefit the broader community.

About Yeshwanth Vijayakumar

Adobe, Inc.

I am a Project Lead/Architect on the Unified Profile Team in the Adobe Experience Platform; it is a petabyte-scale store with a strong focus on millisecond latencies and analytical capabilities, and easily one of Adobe's most challenging SaaS projects in terms of scale. I am actively designing and implementing the interactive segmentation capabilities that help us segment over 2 million records per second using Apache Spark. I look for opportunities to build new features using interesting data structures and machine-learning approaches. In a previous life, I was an ML engineer on the Yelp Ads team, building models for snippet optimization.