Weiting is a senior software engineer at Intel Software Service Group. He has worked on Big Data and Cloud solutions (especially Spark, Hadoop, OpenStack, and Kubernetes/containers) for several years. He has contributed to OpenStack projects (Sahara, Cinder, Manila, Kolla) over the past two years and is currently working on Spark on Kubernetes as well as FPGA enablement in Spark. One of his responsibilities is to research Big Data and Cloud solutions and enable global customers to adopt them. Weiting is also familiar with next-generation cloud infrastructure design and researches emerging technologies on the Intel x86 platform.
In the Big Data field, Spark SQL is an important data processing module for Apache Spark, working with structured, row-based data across the majority of its operators. A field-programmable gate array (FPGA) with highly customized intellectual property (IP) can not only bring better performance but also lower power consumption when accelerating the CPU-intensive segments of an application. Current Spark SQL also leverages WholeStageCodegen to improve performance by generating runtime Java functions that eliminate virtual function calls and keep intermediate data in CPU registers. In this session, we will describe how FPGA can help typical Spark SQL workloads reduce high CPU utilization and release more CPU power, by leveraging a new WholeStageCodegen that generates runtime function calls to process data on the FPGA. We also use Apache Arrow to hold columnar batch data in native memory and manage its memory references inside Spark, and we leverage Apache Arrow Gandiva for just-in-time (JIT) compilation and columnar batch data evaluation.
To enable FPGA support in Spark SQL, operators process multiple rows in one function call, so a single batch-processing function can handle more data in less time. In other words, by leveraging an FPGA accelerator we can move CPU-intensive functions such as data aggregation, sorting, and group-by over large data sets onto FPGA IPs, reserve CPU resources for mission-critical or complicated tasks, and keep data movement as low as possible. This can improve performance dramatically and drive Spark SQL performance to the next level. Finally, we will use micro-benchmarks and a real-world workload as use cases to explain how and when to use these FPGA IPs to optimize your Big Data applications: we will profile a typical Spark SQL workload, highlight the hotspots during data aggregation, sorting, and group-by, and identify which functions cost the most CPU.
The big data platform is evolving to be heterogeneous as dark silicon approaches. As a candidate, FPGA has drawn attention across the industry because of its performance-per-watt efficiency, re-programmable flexibility, and wide range of applications. Various IPs developed on FPGA could potentially boost growing big data and AI workloads on the platform. However, there are gaps in adopting FPGA in big data platforms, such as resource scheduling and isolation. In this session, we would like to introduce a new feature in YARN that treats FPGA as a first-class resource. We will explain the idea in detail, covering how it works, what the feature provides, and why you should use FPGA-as-a-service in big data platforms such as YARN or Kubernetes. Using this feature, big data applications can request FPGA resources more easily and use them exclusively. Furthermore, we will give examples such as basic matrix calculation, data compression/decompression, and deep learning workloads to demonstrate how these workloads can benefit from this feature, and we will show what we are experimenting with to bring the same capabilities to Docker/Kubernetes environments. In this topic, the audience can learn:
- How FPGA as a first-class resource in YARN (YARN-5983) and Kubernetes works
- Current status and issue updates
- Our experience and customer case sharing
Session hashtag: #SAISEco9
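As a hedged sketch of what "FPGA as a first-class resource" looks like in practice (configuration keys follow the YARN pluggable resource model introduced around YARN-5983; check your Hadoop version's documentation before copying), an administrator declares the resource type and a job then requests it like memory or vcores:

```shell
# resource-types.xml (ResourceManager): declare the FPGA resource type
#   yarn.resource-types = yarn.io/fpga
#
# yarn-site.xml (NodeManager): enable the FPGA resource plugin
#   yarn.nodemanager.resource-plugins = yarn.io/fpga

# A MapReduce job (using ToolRunner/GenericOptionsParser) could then
# request one FPGA per map task alongside memory and vcores:
hadoop jar app.jar MyJob \
  -Dmapreduce.map.resource.yarn.io/fpga=1 \
  input/ output/
```

The scheduler then places tasks only on nodes reporting free `yarn.io/fpga` units and grants them exclusively, which is the isolation gap the session addresses.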
In the Big Data area, ETL (Extract, Transform, Load) is an important data processing procedure that transfers raw data from source servers into a data warehouse. A field-programmable gate array (FPGA) with highly customized intellectual property (IP) can not only bring better performance but also lower power consumption when accelerating the computation-intensive segments of an application. In this session, we will describe how FPGA can help a typical Spark ETL workload reduce high CPU utilization and release more CPU power to run compute-intensive jobs. Furthermore, FPGA can also benefit deep learning applications for AI. We will use micro-benchmarks as examples to profile a typical ETL/AI workload, highlight the hotspots during data format transformation, and identify which functions cost the most CPU in the ETL procedure. Leveraging an FPGA accelerator, we can move the functions with high CPU usage, such as data source parsing or data compression/decompression, onto FPGA IPs and keep CPU resources for mission-critical tasks, which can improve performance dramatically. Finally, we will present a real-world workload as a use case to explain how and when to use these FPGA IPs to optimize your Big Data applications. In this topic, you will learn
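To illustrate the kind of hotspot such a profile surfaces, here is a minimal sketch using Python's standard `zlib` (purely illustrative; it is not the session's FPGA IP) of the compression/decompression round trip that burns CPU cycles in an ETL pipeline and is a natural candidate for offload:

```python
import zlib

# Compress and decompress a repetitive data block entirely on the CPU --
# exactly the hot function an ETL profile flags and an FPGA IP could
# take over, freeing cores for the rest of the pipeline.
raw = b"id,name,amount\n42,widget,10.50\n" * 10_000
compressed = zlib.compress(raw, level=6)
restored = zlib.decompress(compressed)

# Round trip is lossless and the repetitive payload shrinks well.
print(len(raw), len(compressed))
```

Profiling how much wall-clock time such calls consume relative to the whole job is how the session decides which functions are worth moving onto FPGA IPs.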