Yeshwanth Vijayakumar

Project Lead/Architect, Adobe, Inc.

I am a Project Lead/Architect on the Unified Profile Team in the Adobe Experience Platform; it's a PB-scale store with a strong focus on millisecond latencies and analytical abilities, and easily one of Adobe's most challenging SaaS projects in terms of scale. I am actively designing/implementing the interactive segmentation capabilities, which help us segment over 2 million records per second using Apache Spark. I look for opportunities to build new features using interesting data structures and machine learning approaches. In a previous life, I was an ML Engineer on the Yelp Ads team building models for snippet optimization.

PAST SESSIONS

Everyday Probabilistic Data Structures for Humans - Summit 2020

Processing large amounts of data for analytical or business cases is a daily occurrence for Apache Spark users. Cost, latency and accuracy are three sides of a triangle a product owner has to trade off. When dealing with TBs of data a day and PBs of data overall, even small efficiencies have a major impact on the bottom line. This talk covers the practical application of the following four data structures, which help design an efficient large-scale data pipeline while keeping costs in check.

  1. Bloom Filters
  2. Hyper Log Log
  3. Count-Min Sketches
  4. T-digests (Bonus)
We will take the fictional example of an eCommerce company, Rainforest Inc, and try to answer the following business questions with our probabilistic data structures and Apache Spark, without writing any SQL (a short code sketch follows the list):
  1. Has user John seen an ad for this product yet?
  2. How many unique users bought items A, B and C?
  3. Who are the top sellers today?
  4. What's the 90th percentile of the cart prices? (Bonus)
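As a rough preview, each question maps onto one structure that Spark already ships (or a close stand-in). The following batch-only sketch uses hypothetical adImpressions/purchases/sales DataFrames and column names, and Spark's approxQuantile (a Greenwald-Khanna style sketch rather than a true t-digest) for the bonus question:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.approx_count_distinct

// Hypothetical DataFrames standing in for Rainforest Inc's data:
//   adImpressions(user_id, product_id), purchases(user_id, item_id, cart_total), sales(seller_id)
def answerQuestions(adImpressions: DataFrame, purchases: DataFrame, sales: DataFrame): Unit = {

  // Q1: Has user John seen an ad for this product yet? -> Bloom filter over (user, product) pairs.
  val seenAd = adImpressions
    .selectExpr("concat(user_id, ':', product_id) AS key")
    .stat.bloomFilter("key", 10000000L, 0.01)          // expected items, false-positive rate
  val johnSawIt = seenAd.mightContain("john:product-42") // false positives possible, never false negatives

  // Q2: How many unique users bought items A, B and C? -> HyperLogLog via approx_count_distinct.
  val uniqueBuyers = purchases
    .filter("item_id IN ('A', 'B', 'C')")
    .agg(approx_count_distinct("user_id", 0.02).as("unique_buyers"))

  // Q3: Who are the top sellers today? -> count-min sketch of seller frequencies; query it for
  // each candidate seller to rank the top-k (estimates may over-count, never under-count).
  val sellerCounts = sales.stat.countMinSketch("seller_id", 0.001, 0.99, 42) // eps, confidence, seed
  val salesForSellerX = sellerCounts.estimateCount("seller-x")

  // Q4 (bonus): 90th percentile of cart prices via an approximate quantile sketch.
  val p90 = purchases.stat.approxQuantile("cart_total", Array(0.9), 0.001)
}
```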
We will dive into how each of these data structures is computed for Rainforest Inc and see which operations and libraries help us achieve our results. The session will simulate a TB of data in a notebook (streaming) and will have code samples showing effective use of the techniques described to answer the business questions listed above. For the implementation, we will build the functions as Structured Streaming Scala components and serialize the results so they can be queried separately to answer our questions. We will also present the cost and latency efficiencies achieved at the Adobe Experience Platform, running at PB scale, by utilizing these techniques, to show that this goes beyond theory.
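One way the streaming side could look (a sketch under assumptions, not the talk's actual code): use foreachBatch to build a sketch per micro-batch with Spark's built-ins, merge it into a running sketch on the driver, and serialize that sketch so it can be loaded and queried outside the job. The Kafka broker, topic and /tmp output path below are hypothetical.

```scala
import java.io.FileOutputStream
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.util.sketch.CountMinSketch

val spark = SparkSession.builder.appName("rainforest-sketches").getOrCreate()

// Running sketch of seller frequencies; foreachBatch runs on the driver, so it can be updated here.
val topSellers = CountMinSketch.create(0.001, 0.99, 42)   // eps, confidence, seed

val sales = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "sales")
  .load()
  .selectExpr("CAST(value AS STRING) AS seller_id")

// Build a sketch for each micro-batch, fold it into the running one, and persist a queryable copy.
val mergeBatch: (DataFrame, Long) => Unit = (batch, batchId) => {
  val batchSketch = batch.stat.countMinSketch("seller_id", 0.001, 0.99, 42)
  topSellers.mergeInPlace(batchSketch)
  val out = new FileOutputStream(s"/tmp/top_sellers_$batchId.cms")
  try topSellers.writeTo(out) finally out.close()
}

val query = sales.writeStream.foreachBatch(mergeBatch).start()
// query.awaitTermination()
```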

How Adobe Does 2 Million Records Per Second Using Apache Spark! - Summit 2020

Adobe's Unified Profile System is the heart of its Experience Platform. It ingests TBs of data a day and is PBs large. As part of this massive growth we have faced multiple challenges in our Apache Spark deployment, which is used from ingestion to processing. We want to share some of our learnings and hard-earned lessons from reaching this scale.

  • Repeated Queries Optimization - or the art of how I learned to cache my physical plans. SQL interfaces expose prepared statements; how do we apply the same idea to batch processing?
  • Know thy Join - joins and group-bys are unavoidable when you don't have much control over the data model, but one must know exactly what happens underneath, given the deadly shuffle one might encounter (see the broadcast-join sketch below).
  • Structured Streaming - Know thy Lag - while consuming off a Kafka topic that sees sporadic loads, it's very important to monitor the consumer lag (see the listener sketch below). It also makes you respect what a beast backpressure is.
  • Skew! Phew! - skewed data causes so many uncertainties, especially at runtime. Configs that applied on day zero no longer apply on day 100. The code must be made resilient to skewed datasets (see the salting sketch below).
  • Sample Sample Sample - sometimes the best way to approach a large problem is to eat a small part of it first.
  • Redis - sometimes the best tool for the job is actually outside your JVM. Pipelining + Redis is a powerful combination to supercharge your data pipeline (see the pipelining sketch below).
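To make "Know thy Join" concrete, here is a minimal sketch of the usual first line of defence against the shuffle: broadcasting the small side of a join. The profiles/segments DataFrames and the segment_id key are hypothetical stand-ins, not the actual Unified Profile data model.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.broadcast

// Joining a very large fact table against a small dimension table: broadcasting the small
// side avoids shuffling the large side across the cluster.
def enrichProfiles(profiles: DataFrame, segments: DataFrame): DataFrame = {
  // Without the hint, Spark typically picks a sort-merge join and shuffles both sides by the key.
  // broadcast() ships `segments` to every executor instead, so `profiles` stays where it is.
  profiles.join(broadcast(segments), Seq("segment_id"), "left")
}

// enrichProfiles(profiles, segments).explain()   // confirm which join strategy was actually chosen
```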
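For "Know thy Lag", Spark's StreamingQueryListener exposes per-trigger progress, including the offsets each Kafka source has reached; comparing those against the topic's latest offsets gives the consumer lag. A minimal sketch that just prints the numbers (in practice you would ship them to a metrics system):

```scala
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener.{QueryProgressEvent, QueryStartedEvent, QueryTerminatedEvent}

// Logs throughput and source offsets for every micro-batch.
class LagLoggingListener extends StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit = ()
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()
  override def onQueryProgress(event: QueryProgressEvent): Unit = {
    val p = event.progress
    println(s"query=${p.name} batch=${p.batchId} rows=${p.numInputRows} rowsPerSec=${p.processedRowsPerSecond}")
    p.sources.foreach { s =>
      // start/end offsets are JSON strings of partition -> offset; diff against the broker's
      // latest offsets (via your Kafka tooling) to compute the actual lag.
      println(s"  source=${s.description} startOffset=${s.startOffset} endOffset=${s.endOffset}")
    }
  }
}

// spark.streams.addListener(new LagLoggingListener())   // register once per SparkSession
```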
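For the skew bullet, one common mitigation is salting the hot join key so a single key no longer lands in a single task. A rough sketch, with hypothetical events/users DataFrames, a user_id key and a bucket count of 32 (Spark 3's adaptive skew-join handling is another option where it fits):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, concat_ws, explode, floor, lit, rand, sequence}

// Split each key on the large (skewed) side into `saltBuckets` sub-keys, and replicate the
// small side once per bucket so every salted key still finds its match.
def saltedJoin(events: DataFrame, users: DataFrame, saltBuckets: Int = 32): DataFrame = {
  val saltedEvents = events
    .withColumn("salt", floor(rand() * saltBuckets).cast("string"))
    .withColumn("salted_user_id", concat_ws("#", col("user_id"), col("salt")))

  val saltedUsers = users
    .withColumn("salt", explode(sequence(lit(0), lit(saltBuckets - 1))).cast("string"))
    .withColumn("salted_user_id", concat_ws("#", col("user_id"), col("salt")))

  saltedEvents.join(saltedUsers.drop("salt", "user_id"), Seq("salted_user_id"))
}
```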
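And for the Redis bullet, the win comes from batching commands through a pipeline inside foreachPartition, paying one network round trip per flush instead of one per record. A sketch assuming the Jedis client, with a hypothetical host, key layout and flush size:

```scala
import org.apache.spark.sql.{DataFrame, Row}
import redis.clients.jedis.Jedis

// Write per-user aggregates to Redis, flushing the pipeline every 1000 commands.
def writeToRedis(aggregates: DataFrame): Unit = {
  val writePartition: Iterator[Row] => Unit = { rows =>
    val jedis = new Jedis("redis-host", 6379)
    val pipeline = jedis.pipelined()
    var pending = 0
    rows.foreach { row =>
      pipeline.hset(s"user:${row.getAs[String]("user_id")}", "purchase_count",
        row.getAs[Long]("purchase_count").toString)
      pending += 1
      if (pending >= 1000) { pipeline.sync(); pending = 0 }
    }
    pipeline.sync()   // flush whatever is left
    jedis.close()
  }
  aggregates.foreachPartition(writePartition)
}
```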

We will present our war stories and lessons for the above, which we hope will benefit the broader community.