Dr. Kazuaki Ishizaki

Research Staff, IBM

Dr. Kazuaki Ishizaki is a research staff member at IBM Research – Tokyo. He has over 20 years of experience in the research and development of dynamic compilers for Java and other languages. He is an expert in compiler optimizations, runtime systems, and parallel processing. He worked on the IBM Java just-in-time compiler and virtual machine from JDK 1.0 through Java 8. His research focuses on how system software can enable programmers to transparently exploit SIMD units and GPUs from high-level languages and frameworks. He has made many contributions to Apache Spark, especially in Catalyst and Tungsten. He is an ACM senior member.

SESSIONS

Transparent GPU Exploitation on Apache Spark

Graphics Processing Units (GPUs) are becoming popular for achieving high performance on computation-intensive workloads. A GPU offers thousands of cores for floating-point computation, which benefits machine learning algorithms that are computation intensive and parallelizable on the Spark platform. While Spark's current execution strategy distributes a workload's computation across nodes, only the CPUs on each node perform that computation. If Spark could also use the GPUs on each node, users would benefit: GPUs can reduce the execution time of an algorithm and reduce the number of nodes in a cluster. Although application programmers increasingly use the DataFrame API, machine learning algorithms still work with RDDs, which keep data distributed across nodes for computation on CPU cores. An RDD keeps data in a Scala collection class in a row-based format, whereas the GPU computation model achieves high performance on contiguous data in a column-based format. To enable efficient GPU computation on Spark, we present a column-based RDD that keeps data as arrays. When a program runs on the GPU, our implementation simply copies the data in the column-based RDD to GPU device memory, where the GPU cores execute operations faster; CPU cores can still execute existing functions on the column-based RDD. In this session, we will make the following contributions to the Spark community: (1) a brief design overview of GPU exploitation that is transparent to programmers; (2) our APIs for building a GPU-accelerated library using the column-based RDD, along with the performance gains of several programs; and (3) a discussion of ongoing work on transparent GPU code generation from the DataFrame API. The package for (2) is available at http://github.com/IBMSparkGPU/GPUEnabler
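To make the row-based versus column-based distinction concrete, here is a minimal sketch, not the GPUEnabler API itself: the case classes Point and PointColumns and the toColumns helper are illustrative assumptions, showing how records in an ordinary RDD could be regrouped per partition into contiguous primitive arrays of the kind a GPU kernel can copy to device memory in one transfer.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

// Hypothetical row-based record, as an ordinary RDD would store it.
case class Point(x: Float, y: Float)

// A minimal column-based container: each field becomes one contiguous array,
// the layout a GPU can copy to device memory and process efficiently.
case class PointColumns(xs: Array[Float], ys: Array[Float])

object ColumnarSketch {
  // Convert one partition of row-based records into the column-based form.
  def toColumns(rows: Iterator[Point]): Iterator[PointColumns] = {
    val buf = rows.toArray
    Iterator(PointColumns(buf.map(_.x), buf.map(_.y)))
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("columnar-sketch").master("local[*]").getOrCreate()
    val rowRdd: RDD[Point] = spark.sparkContext.parallelize(
      Seq(Point(1.0f, 2.0f), Point(3.0f, 4.0f), Point(5.0f, 6.0f)))

    // One column-based block per partition; a GPU runtime would copy
    // xs and ys to device memory and launch a kernel over them.
    val columnRdd: RDD[PointColumns] = rowRdd.mapPartitions(toColumns)

    // CPU fallback: existing functions still work on the columnar data.
    val sums = columnRdd.map(c => c.xs.zip(c.ys).map { case (x, y) => x + y })
    println(sums.collect().map(_.mkString(",")).mkString(" | "))
    spark.stop()
  }
}
```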

Demystifying DataFrame and Dataset

Apache Spark achieves high performance together with ease of programming thanks to a well-balanced design that combines easy-to-use APIs with state-of-the-art runtime optimization. In Spark 1.3, the DataFrame API was introduced for writing SQL-like programs in a declarative manner; it can achieve superior performance by leveraging Project Tungsten. In Spark 1.6, the Dataset API was introduced for writing generic programs, such as machine learning, in a functional manner; it was also designed to achieve superior performance by reusing the advantages of Project Tungsten. The differences between DataFrame and Dataset are not fully understood in the community, yet they are worth understanding now that writing programs with Dataset, and migrating programs from RDD to Dataset, is becoming popular. This session will explore the differences between DataFrame and Dataset using programs that perform the same operations (e.g., filter()). Dr. Ishizaki will compare the two at the levels of source code, SQL execution plans, SQL optimizations, generated Java code, data representations, and runtime performance. He will show the performance differences between equivalent DataFrame and Dataset programs and identify their causes. He will also explain opportunities and approaches for improving the performance of Dataset programs by alleviating some of these issues. Attendees will learn to understand the differences between DataFrame and Dataset from several views; see the performance differences of programs that perform the same computation using the DataFrame API and the Dataset API; and understand opportunities to improve the performance of programs written with the Dataset API. Session hashtag: #SFdev20
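As a rough illustration of the kind of comparison the session covers (assuming a simple filter() as in the abstract; the Record case class and the numbers are illustrative, not the session's actual examples), the same predicate can be written as a DataFrame expression or as a Dataset lambda, and explain() shows how differently the two are planned:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative record type; the session's actual examples may differ.
case class Record(id: Long, value: Double)

object FilterComparison {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("df-vs-ds-filter").master("local[*]").getOrCreate()
    import spark.implicits._

    val ds = spark.range(0, 1000).as[Long].map(i => Record(i, i * 0.5))
    val df = ds.toDF()

    // DataFrame: the predicate is a Catalyst expression, so the optimizer can
    // analyze it and Tungsten can evaluate it on the internal binary row format.
    val dfFiltered = df.filter($"value" > 100.0)

    // Dataset: the predicate is an opaque Scala lambda, which typically requires
    // deserializing each row into a Record object before the lambda runs.
    val dsFiltered = ds.filter(r => r.value > 100.0)

    // Compare the physical plans; explain(true) also shows the analyzed and
    // optimized logical plans.
    dfFiltered.explain()
    dsFiltered.explain()

    println(s"DataFrame count: ${dfFiltered.count()}, Dataset count: ${dsFiltered.count()}")
    spark.stop()
  }
}
```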