
Bogdan Ghit

Software Engineer, Databricks

Bogdan Ghit is a computer scientist and software engineer at Databricks, where he works on optimizing the SQL performance of Apache Spark. Prior to joining Databricks, Bogdan pursued his PhD at Delft University of Technology, where he worked broadly on datacenter scheduling with a focus on data analytics frameworks such as Hadoop and Spark. His thesis led to numerous publications in top conferences such as ACM SIGMETRICS and ACM HPDC.

PAST SESSIONS

Scaling Data Analytics Workloads on Databricks (Summit Europe 2019)

Imagine an organization with thousands of users who want to run data analytics workloads. These users shouldn’t have to worry about provisioning instances from a cloud provider, deploying a runtime processing engine, scaling resources based on utilization, or ensuring their data is secure. Nor should the organization’s system administrators.

In this talk we will highlight some of the exciting problems we're working on at Databricks in order to meet the demands of organizations that are analyzing data at scale. In particular, data engineers attending this session will walk away with an understanding of how we:

  • Manage a typical query lifetime through the Databricks software stack
  • Dynamically allocate resources to satisfy the elastic demands of a single cluster (see the configuration sketch after this list)
  • Isolate the data and the generated state within a large organization with multiple clusters
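
As a rough illustration of the elastic-allocation point above, the sketch below uses Spark's standard dynamic allocation settings, which let a single cluster grow and shrink its executor count with demand. The specific bounds are illustrative assumptions, not the values Databricks uses internally.

```scala
import org.apache.spark.sql.SparkSession

// A minimal sketch of elastic resource allocation using Spark's built-in
// dynamic allocation settings. The bounds below are illustrative assumptions.
val spark = SparkSession.builder()
  .appName("elastic-analytics-sketch")
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "2")   // floor when the cluster is idle
  .config("spark.dynamicAllocation.maxExecutors", "100") // ceiling under peak load
  .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
  .config("spark.shuffle.service.enabled", "true")       // lets idle executors be released safely
  .getOrCreate()
```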

Dynamic Partition Pruning in Apache Spark (Summit Europe 2019)

In data analytics frameworks such as Spark it is important to detect and avoid scanning data that is irrelevant to the executed query, an optimization known as partition pruning. Dynamic partition pruning occurs when the optimizer is unable to identify at parse time the partitions it can eliminate. In particular, we consider a star schema, which consists of one or more fact tables referencing any number of dimension tables. In such join operations, we can prune the partitions the join reads from the fact table by identifying the fact-table partitions that correspond to rows surviving the dimension-table filters. In this talk we present a mechanism for performing dynamic partition pruning at runtime by reusing the dimension table broadcast results in hash joins, and we show significant improvements for most TPC-DS queries.
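
To make the setting concrete, below is a minimal sketch of a star-schema join in Spark SQL. The table and column names are hypothetical, and spark.sql.optimizer.dynamicPartitionPruning.enabled is the standard Spark 3.x switch for this optimization (enabled by default).

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("dpp-sketch")
  // Enabled by default in Spark 3.x; set explicitly here only for clarity.
  .config("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")
  .getOrCreate()

// Hypothetical star schema: a fact table `sales` partitioned by date_id and a
// small dimension table `dates` that carries the selective filter.
spark.sql("""
  SELECT f.store_id, SUM(f.amount) AS total
  FROM sales f
  JOIN dates d ON f.date_id = d.date_id
  WHERE d.year = 2019 AND d.quarter = 4
  GROUP BY f.store_id
""").explain()
// At runtime, the filtered (and typically broadcast) dimension rows are reused
// to skip scanning the fact-table partitions whose date_id values cannot match.
```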

Correctness and Performance of Apache Spark SQL (Summit Europe 2018)

In this talk, we present a comprehensive framework we developed at Databricks for assessing the correctness, stability, and performance of our Spark SQL engine. Apache Spark is one of the most actively developed open source projects, with more than 1200 contributors from all over the world. At this scale and pace of development, mistakes are bound to happen. We will discuss the various approaches we take, including random query generation, random data generation, random fault injection, and longevity stress tests. We will demonstrate the effectiveness of the framework by highlighting several correctness issues we have found through random query generation and critical performance regressions we were able to diagnose within hours thanks to our automated benchmarking tools. Session hashtag: #SAISDev10
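
As a rough sketch of the random query generation idea (under assumed table names, columns, and flags; not the Databricks framework itself), one can generate many small queries and cross-check their results under different engine settings:

```scala
import org.apache.spark.sql.SparkSession
import scala.util.Random

// A minimal differential-testing sketch: run the same randomly generated query
// with an optimization toggled on and off and compare the results.
// The table, columns, and chosen flag are illustrative assumptions.
val spark = SparkSession.builder().appName("random-query-sketch").getOrCreate()
import spark.implicits._

Seq((1, 10L, "a"), (2, 20L, "b"), (3, 30L, "a"))
  .toDF("id", "amount", "category")
  .createOrReplaceTempView("t")

val rng = new Random(42)
val columns = Seq("id", "amount")

def randomQuery(): String = {
  val col = columns(rng.nextInt(columns.length))
  val threshold = rng.nextInt(30)
  s"SELECT category, SUM($col) AS s FROM t WHERE $col > $threshold GROUP BY category"
}

(1 to 100).foreach { _ =>
  val query = randomQuery()
  // Run once with whole-stage code generation enabled and once without.
  spark.conf.set("spark.sql.codegen.wholeStage", "true")
  val expected = spark.sql(query).collect().toSet
  spark.conf.set("spark.sql.codegen.wholeStage", "false")
  val actual = spark.sql(query).collect().toSet
  assert(expected == actual, s"Result mismatch for: $query")
}
```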