Sim Simeonov

CTO, Swoop

Sim Simeonov is the founding CTO of Swoop, a startup that brings the power of search advertising to content. Previously, Sim was the founding CTO at Ghostery, the platform for safe & fast digital experiences, and Thing Labs, a social media startup acquired by AOL. Earlier, Sim was vice president of emerging technologies and chief architect at Macromedia (now Adobe) and chief architect at Allaire, one of the first Internet platform companies. He blogs at blog.simeonov.com, tweets as @simeons and lives in the Greater Boston area with his wife, son and an adopted dog named Tye.

SESSIONS

Bulletproof Jobs: Patterns for Large-Scale Spark Processing

Big data never stops and neither should your Spark jobs. They should not stop when they see invalid input data. They should not stop when there are bugs in your code. They should not stop because of I/O-related problems. They should not stop because the data is too big. Bulletproof jobs not only keep working; they also make it easy to identify and address the problems commonly encountered in large-scale production Spark processing: data quality, code quality, operational issues, and data volumes that rise over time. In this session you will learn three key principles for bulletproofing your Spark jobs, together with the architecture and system patterns that enable them. The first principle is idempotence. Exemplified by Spark 2.0 Idempotent Append operations, it enables 10x easier failure management. The second principle is row-level structured logging. Exemplified by Spark Records, it enables 100x (yes, one hundred times) faster root cause analysis. The third principle is invariant query structure. It is exemplified by Resilient Partitioned Tables, which allow flexible management of large-scale data over long periods of time, including late-arrival handling, reprocessing of existing data to deal with bugs or data quality issues, and repartitioning of already-written data. These patterns have been used successfully in production at Swoop in the demanding world of petabyte-scale online advertising.
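
Neither the Idempotent Append implementation nor the Spark Records internals are reproduced here, but a minimal sketch conveys the first principle. The `IdempotentAppend` helper below is invented for illustration, not Swoop's actual API: each logical batch owns a deterministic output location and overwrites it, so rerunning a failed job replaces partial output rather than duplicating rows.

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// Illustrative sketch, not Swoop's actual API: one directory per logical
// batch, always written with Overwrite, makes reruns safe by construction.
object IdempotentAppend {
  def appendDay(df: DataFrame, basePath: String, day: String): Unit = {
    val target = s"$basePath/date=$day" // deterministic per-batch location
    df.write
      .mode(SaveMode.Overwrite) // a rerun replaces partial output, never duplicates it
      .parquet(target)
  }
}
```

The second principle can be sketched the same way: row-level structured logging wraps every output row in an envelope that carries the issues encountered while producing it. The field names below are simplified stand-ins, not the actual Spark Records schema.

```scala
// Simplified envelope in the spirit of Spark Records (field names illustrative).
case class Issue(severity: String, message: String)
case class Record[A](data: Option[A], issues: Seq[Issue])

// Per-row processing never throws; failures become queryable data,
// so root cause analysis reduces to filtering the job's own output.
def parseCount(raw: String): Record[Int] =
  try Record(Some(raw.trim.toInt), issues = Nil)
  catch {
    case e: NumberFormatException =>
      Record(None, Seq(Issue("ERROR", s"bad integer '$raw': ${e.getMessage}")))
  }
```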

The Smart Data Warehouse: Goal-Based Data Production

Since the invention of SQL and relational databases, data production has been about specifying how data should be transformed through queries. While Apache Spark can certainly be used as a general distributed SQL-like query engine, the power and granularity of Spark's APIs allow for a fundamentally different, and far more productive, approach. This session will introduce the principles of goal-based data production with examples ranging from ETL to exploratory data analysis to feature engineering for machine learning. Goal-based data production concerns itself with specifying WHAT the desired result is, leaving the details of HOW the result is achieved to the smart data warehouse running on top of Spark. That not only substantially increases productivity, but also significantly expands the audience that can work directly with Spark: from developers and data scientists to technical business users. With specific data and architecture patterns and live demos, this session will demonstrate how easy it is for any company to create its own smart data warehouse with Spark 2.x and gain the benefits of goal-based data production. Session hashtag: #SFexp10
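
To make the WHAT-versus-HOW distinction concrete, consider a hypothetical goal declaration; the `Goal` type and its fields are invented for this sketch and are not the actual Swoop API.

```scala
// Hypothetical goal declaration: the user states WHAT result is wanted.
case class Goal(
  metrics:    Seq[String],     // e.g., "impressions", "clicks", "ctr"
  dimensions: Seq[String],     // e.g., "campaign"
  grain:      String,          // output granularity, e.g., "daily"
  window:     (String, String) // date range the result must cover
)

// The smart data warehouse supplies the HOW: it would resolve metric
// definitions, choose sources and partitions, derive ctr = clicks /
// impressions, and run only the Spark jobs needed to build or refresh it.
val dailyCampaignPerformance = Goal(
  metrics    = Seq("impressions", "clicks", "ctr"),
  dimensions = Seq("campaign"),
  grain      = "daily",
  window     = ("2017-01-01", "2017-03-31")
)
```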

BoF Discussion: A roadmap for extending Apache Spark

Best suited for current Spark contributors and Spark package creators, this conversation will focus on how the open-source community can help Spark grow outside of the Apache project, which has strict criteria about what is in and out of scope.

Goal-Based Data Production: The Spark of a Revolution

Since the invention of SQL and relational databases, data production has been about specifying how data is transformed through queries. While Apache Spark can certainly be used as a general distributed query engine, the power and granularity of Spark's APIs enable a revolutionary increase in data engineering productivity: goal-based data production. Goal-based data production concerns itself with specifying WHAT the desired result is, leaving the details of HOW the result is achieved to a smart data warehouse running on top of Spark. That not only substantially increases productivity, but also significantly expands the audience that can work directly with Spark: from developers and data scientists to technical business users. With specific data and architecture patterns spanning ETL to machine-learning data preparation, and with live demos, this session will demonstrate how Spark users can gain the benefits of goal-based data production. Session hashtag: #EUent1
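
As a rough illustration of that division of labor, the toy "planner" below (invented for this example; a real goal-based system would also handle sourcing, dependencies, and incremental rebuilds) expands a declared set of dimensions into the Spark aggregation that computes it.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Toy planner: the user declares WHAT (the dimensions to roll up by);
// the system generates HOW (the actual Spark query).
def buildRollup(spark: SparkSession, dimensions: Seq[String]): DataFrame =
  spark.table("ad_events") // assumed source table name
    .groupBy(dimensions.map(col): _*)
    .agg(
      sum("impressions").as("impressions"),
      sum("clicks").as("clicks")
    )
    .withColumn("ctr", col("clicks") / col("impressions"))

// A technical business user supplies only the goal:
// val byCampaignAndDay = buildRollup(spark, Seq("campaign", "date"))
```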