Haohai Ma

Software Architect, IBM

Haohai Ma is a Software Architect at Spectrum Computing, IBM. Over the past 10 years he has built expertise in HPC, SOA, MapReduce, YARN, Spark, Docker, and Kubernetes. He is one of the major contributors to IBM's BigData offerings. Haohai holds a PhD in Computer Science from Peking University.

PAST SESSIONS

Running Spark Inside Docker Containers: From Workload to Cluster (Summit Europe 2017)

This presentation describes the journey we went through in containerizing Spark workloads into multiple elastic Spark clusters in a multi-tenant Kubernetes environment. Initially we deployed the Spark binaries onto a host-level filesystem, so that Spark drivers, executors, and masters could transparently migrate into Docker containers by automatically mounting host-level volumes. In this environment, we do not need to prepare a specific Spark image to run Spark workloads in containers.

We then utilized Kubernetes Helm charts to deploy a Spark cluster. The administrator can further create a Spark instance group for each tenant. A Spark instance group, which is akin to the Spark notion of a tenant, is logically an independent kingdom for a tenant's Spark applications, in which the tenant owns dedicated Spark masters, a history server, a shuffle service, and notebooks. Once a Spark instance group is created, it automatically generates its image and commits it to a specified repository. Meanwhile, from Kubernetes' perspective, each Spark instance group is a first-class deployment, so the administrator can scale its size up or down according to the tenant's SLA and demand.

In a cloud-based data center, each Spark cluster can provide Spark as a service while sharing the Kubernetes cluster; each tenant registered with the service gets a fully isolated Spark instance group. In an on-prem Kubernetes cluster, each Spark cluster can map to a business unit, so each user in the BU can get a dedicated Spark instance group.

The next step on this journey will address resource sharing across Spark instance groups by leveraging new Kubernetes features (Kubernetes issues 31068/31069), as well as elastic workload containers that scale with job demand (SPARK-18278).

Demo: https://www.youtube.com/watch?v=eFYu6o3-Ea4&t=5s

Session hashtag: #EUeco5
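To make the host-volume technique concrete, here is a minimal sketch (not from the talk) using the Docker SDK for Python: a generic JRE image runs a Spark standalone worker with the host's Spark installation bind-mounted in, so no Spark-specific image is required. The install path, work directory, and master URL are assumptions.

```python
import docker

SPARK_HOME = "/opt/spark"  # assumed host-level install path

client = docker.from_env()

# Launch a Spark standalone worker in a generic JRE container;
# the Spark binaries come from the host via a read-only bind mount,
# so no Spark-specific image has to be built or pulled.
container = client.containers.run(
    image="openjdk:8-jre",                       # generic base image
    command=[
        f"{SPARK_HOME}/bin/spark-class",
        "org.apache.spark.deploy.worker.Worker",
        "spark://spark-master:7077",             # hypothetical master URL
    ],
    volumes={
        SPARK_HOME: {"bind": SPARK_HOME, "mode": "ro"},               # binaries
        "/var/spark/work": {"bind": "/var/spark/work", "mode": "rw"}, # work dir
    },
    environment={"SPARK_HOME": SPARK_HOME},
    detach=True,
)
print(container.short_id)
```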
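Deploying the cluster through a Helm chart can be driven from the same tooling. The sketch below simply shells out to the Helm 3 CLI; the chart path, release name, and value keys are hypothetical placeholders, not the chart used in the talk.

```python
import subprocess

# Install a Spark cluster release from a local chart into its own namespace.
subprocess.run(
    [
        "helm", "install", "spark-cluster", "./charts/spark",
        "--namespace", "spark", "--create-namespace",
        "--set", "master.replicas=1",
        "--set", "worker.replicas=3",
    ],
    check=True,
)
```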
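Because each Spark instance group is a first-class Kubernetes deployment, resizing one to match a tenant's SLA reduces to patching the Deployment's scale subresource. A minimal sketch with the official Kubernetes Python client follows; the deployment and namespace names are assumptions.

```python
from kubernetes import client, config

def scale_instance_group(name: str, namespace: str, replicas: int) -> None:
    """Resize a Spark instance group by patching the scale
    subresource of its backing Deployment."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Hypothetical example: grow tenant-a's instance group to 10 replicas.
scale_instance_group("spark-ig-tenant-a", namespace="tenant-a", replicas=10)
```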

Learn more:
  • Lessons Learned From Running Spark On Docker
  • Apache Spark on Kubernetes