Dongjoon Hyun

Software Engineer, Apple

I’m a software engineer whose main focus is fast and efficient data processing. At Apple, as an Apache Spark and Apache ORC PMC member, I develop and maintain internal distributions powered by Apache Spark and Apache ORC.

Past sessions

Summit 2021 The Rise of ZStandard: Apache Spark/Parquet/ORC/Avro

May 27, 2021 11:35 AM PT

Zstandard is a fast compression algorithm that you can use in Apache Spark in various ways. In this talk, I briefly summarize the evolution of Apache Spark in this area, along with four main use cases, their benefits, and the next steps:
1) Zstandard can significantly optimize Spark local disk I/O by compressing shuffle files. This is very useful in Kubernetes environments: it helps not only when you use `emptyDir` with the `memory` medium, but also maximizes the OS cache benefit when you use shared SSDs or container-local storage. In Spark 3.2, SPARK-34390 takes advantage of Zstandard's buffer pool feature, and its performance gain is impressive, too (see the configuration sketch after this list).
2) Event log compression is another area where you can save storage cost on cloud storage like S3 and improve usability. SPARK-34503 officially switched the default event log compression codec from LZ4 to Zstandard.
3) Zstandard data file compression gives you further benefits when you use ORC/Parquet files as your input and output. Apache ORC 1.6 already supports Zstandard, and Apache Spark enables it via SPARK-33978. The upcoming Parquet 1.12 will support Zstandard compression as well.
4) Last but not least, since Apache Spark 3.0, Zstandard has been used to serialize/deserialize MapStatus data instead of Gzip.
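
As a rough illustration of these use cases, here is a minimal Scala sketch of the relevant configuration keys. The key names follow the Apache Spark 3.x configuration documentation, but defaults and availability vary by Spark version, so treat this as an assumption-laden example rather than a definitive recipe.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch (assumes Apache Spark 3.2): route shuffle, event log,
// MapStatus, and data file compression through Zstandard.
val spark = SparkSession.builder()
  .appName("zstd-example")
  // 1) Compress shuffle blocks with zstd instead of the lz4 default.
  .config("spark.io.compression.codec", "zstd")
  // SPARK-34390 (Spark 3.2): reuse Zstandard buffers via a buffer pool.
  .config("spark.io.compression.zstd.bufferPool.enabled", "true")
  // 2) Compress event logs; zstd is the default codec after SPARK-34503.
  .config("spark.eventLog.compress", "true")
  .config("spark.eventLog.compression.codec", "zstd")
  // 4) MapStatus serialization already defaults to zstd since Spark 3.0.
  .config("spark.shuffle.mapStatus.compression.codec", "zstd")
  // 3) Write ORC/Parquet data files with zstd (also settable per write).
  .config("spark.sql.orc.compression.codec", "zstd")
  .config("spark.sql.parquet.compression.codec", "zstd")
  .getOrCreate()

// Per-write alternative for use case 3):
spark.range(1000).write.option("compression", "zstd").orc("/tmp/zstd_orc")
```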

There is more community work underway to utilize Zstandard to improve Spark. For example, the Apache Avro community also supports Zstandard, and SPARK-34479 aims to support Zstandard in Spark's Avro file format in Spark 3.2.0.
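
A minimal sketch of the Avro case, assuming Spark 3.2.0 with the spark-avro module on the classpath; note that the Avro codec name is `zstandard` rather than `zstd`:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: write Avro files compressed with Zstandard.
// Assumes the org.apache.spark:spark-avro artifact is on the classpath.
val spark = SparkSession.builder().appName("avro-zstd").getOrCreate()

// Session-wide default codec for Avro output.
spark.conf.set("spark.sql.avro.compression.codec", "zstandard")

spark.range(1000)
  .write
  .format("avro")
  .option("compression", "zstandard") // or per write, as here
  .save("/tmp/zstd_avro")
```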

In this session watch:
Dongjoon Hyun, Software Engineer, Apple
Pang Wu, Software Engineer, Apple


Summit 2020 Native Support of Prometheus Monitoring in Apache Spark 3.0

June 23, 2020 05:00 PM PT

All production environments require monitoring and alerting. Apache Spark has a configurable metrics system that allows users to report Spark metrics to a variety of sinks. Prometheus is one of the popular open-source monitoring and alerting toolkits used together with Apache Spark. Previously, users could use

  1. a combination of the Prometheus JMX exporter and Apache Spark's JMXSink
  2. third-party libraries
  3. a custom Sink implementation for more complex metrics like GPU resource usage

Apache Spark 3.0.0 will add another easy way to support Prometheus for general use cases. In this talk, we will cover the following and show a demo.

  1. How to enable the new Prometheus features (a configuration sketch follows this list).
  2. What kinds of metrics are available.
  3. General tips for monitoring and alerting on Structured Streaming jobs. (Spark side / Prometheus side)
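
A minimal sketch of enabling the native Prometheus support. The key names follow the Spark 3.0 monitoring documentation; the port and paths noted in the comments are defaults and an assumption for your deployment:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch (assumes Apache Spark 3.0): expose metrics in Prometheus
// format without a JMX exporter or a third-party library.
val spark = SparkSession.builder()
  .appName("prometheus-example")
  // Driver metrics at <driver>:4040/metrics/prometheus and aggregated
  // executor metrics at <driver>:4040/metrics/executors/prometheus.
  .config("spark.ui.prometheus.enabled", "true")
  // PrometheusServlet sink, normally set in conf/metrics.properties.
  .config("spark.metrics.conf.*.sink.prometheusServlet.class",
          "org.apache.spark.metrics.sink.PrometheusServlet")
  .config("spark.metrics.conf.*.sink.prometheusServlet.path",
          "/metrics/prometheus")
  .getOrCreate()
```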

Currently, Apache Spark exposes metrics at the Master/Worker/Driver/Executor level to integrate with an existing Prometheus server easily and with less effort. This is already available in Apache Spark 3.0.0-preview and 3.0.0-preview2, so you can try it right now.
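
For a quick check, the driver-side endpoint can be scraped directly. A minimal sketch, where the localhost:4040 address assumes a local driver with the default UI port:

```scala
// Minimal sketch: fetch the driver's Prometheus-format metrics and print
// the first few lines. Assumes spark.ui.prometheus.enabled=true and the
// default Spark UI port 4040 on the local machine.
val metrics =
  scala.io.Source.fromURL("http://localhost:4040/metrics/prometheus").mkString
println(metrics.split("\n").take(10).mkString("\n"))
```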