Simplify and Boost Spark 3 Deployments with Hypervisor-Native Kubernetes


In 2020, two significant IT platforms converged. On the one hand, Spark 3 became available with support for Kubernetes as a scheduler. On the other hand, VMware released Project Pacific, an industry-grade Kubernetes that is natively integrated with the VMware vSphere 7 hypervisor. In this session, we present a reference architecture that integrates these two platforms. With the integration of Spark 3 and VMware Pacific, Spark clusters get deployed on the same Kubernetes-plus-virtual-machines platform that is used by tens of thousands of companies across the world. These are some of the main benefits:

  • Scalable and straightforward deployments: With Kubernetes, IT operations teams can add nodes to clusters via a few simple command-line instructions to expand capacity. VMware Pacific delivers enterprise-grade Kubernetes to run Spark clusters after a few simple setup steps.
  • Reliability: Kubernetes maintains the specified number of workers, even after hardware failures occur. It can self-heal and bring services back on track in a matter of seconds, so Spark clusters may get a significant boost in their levels of availability.
  • Predictable performance: VMware Pacific delivers a Kubernetes runtime native to the vSphere hypervisor, which has proven to be more efficient and scalable than the Linux OS at scheduling containers. High performance and throughput are paramount attributes for a big-data analytics platform such as Spark.

Session elements:

  • Introduction to VMware Pacific’s Kubernetes architecture
  • How to bring up a hypervisor-native Kubernetes cluster using VMware Pacific (demo)
  • Architecture and sizing configuration of vSphere and Kubernetes to run Spark 3
  • How to deploy and configure Spark 3 (demo)
  • Running machine learning tasks using ML libraries from Spark 3 (demo)
  • Q&A


 


Video Transcript

– Hi and welcome to this talk on Spark, Kubernetes and VMware vSphere. My name is Justin Murray and I’ll be introducing my co-speaker in one second, and we’re very glad to be here at the Spark + AI Summit 2020. Our title today is “Simplify and Boost Apache Spark Deployments with Hypervisor-Native Kubernetes.”

Simplify and Boost Apache Spark Deployments with Hypervisor-Native Kubernetes

Well that’s quite a mouthful, but we’re going to be talking here about a very tight link between Kubernetes and the VMware hypervisor as a basis for running Apache Spark. So my co-speaker on the next slide is Enrique Corro and I’ll ask Enrique to introduce himself briefly here. – Thank you Justin. Hello everyone. Thank you for joining us today. My name is Enrique Corro. I work for the Office of the CTO at VMware as a data science engineer. I’m super happy to be here with Justin and all of you, thank you. – Thanks again. And I belong to the Cloud Services Business Unit within VMware, which is actually running vSphere technology, our core hypervisor technology, on VMware Cloud on AWS, which we’ll mention briefly here, as well as running it on premises, and that’ll be a little novelty in this talk for you. So let’s move to the next one.

Motivation Behind An Agile Hybrid Cloud Infra

Okay. So what’s our motivation for this talk? VMware, as you know, has been in business for about 20 years or so and has really served the needs of the IT administrator, to be quite frank. We’ve given scalable infrastructure, hybrid cloud infrastructure, and by that I mean both on premises and in the cloud on VMware Cloud on AWS and on other hyperscalers. We’ve given that infrastructure for years and made it easy to manage and easy for users to consume, but we know there’s a community of developers out there, as you’ll see in the next section, and those developers really need a place to build their applications in a reliable, scalable and cost-efficient way, and largely developers and DevOps people are building containers today. So they’re using tools of their own to do that, and the major platform on which you’re running containers is Kubernetes. So this talk really centers around running Spark and Kubernetes together, which you could argue is displacing Hadoop from its traditional big data role. What we’ve done here in the vSphere 7 release is integrate those Kubernetes services that you see on the bottom left here with all of the other services that you find, network services, compute services, storage services; they’re all first-class citizens in VMware. Now Kubernetes is a first-class citizen as well. It’s tightly integrated into the control plane of VMware, and the idea is to integrate and harmonize the infrastructure for both the administrator and the developer, so that the developer can say to a vSphere environment: give me a Kubernetes cluster please, I want to use it for about a week or two, I want to build my application and deploy it into Kubernetes, and then I want to tear it down and put it into production on an even different Kubernetes cluster. So all of that is now on target for VMware vSphere. And I’ll hand over to my colleague Enrique to describe that in more detail. Enrique. – Thank you Justin. Okay. I’m going to start by talking about VMware vSphere with Kubernetes, which is a new VMware platform designed to bridge the gap between infrastructure and application development.

Introducing vSphere 7 with Kubernetes Transform your infrastructure to build, run and manage modern applications (Spark included)

vSphere, from version 7, basically incorporates Kubernetes as a series of native processes within the hypervisor. This allows rapid provisioning of developer services such as the container runtime and registry, networking and persistent storage volumes. All these services are consumable from the standard Kubernetes API, which is very important for developers nowadays. The integration of Kubernetes and the hypervisor improves vSphere administrators’ productivity and allows IT operations teams to focus on improving infrastructure attributes such as performance, security, availability, cost and troubleshooting. At the same time, DevOps teams get self-service environments that allow them to code, test, deploy and support modern applications with great agility. Let’s consider that the container orchestration approach offered by Kubernetes also applies to Spark, which officially supports Kubernetes as an orchestrator as of Spark 3.

Now I will talk about VMware Tanzu, a new platform designed to build, run and manage modern applications such as Spark on top of properly managed, enterprise-grade Kubernetes platforms.

Tanzu Kubernetes Grid (TKG)

At the heart of VMware Tanzu, we have the Tanzu Kubernetes Grid, also known as TKG. The Tanzu Kubernetes Grid provides a consistent, upstream-compatible implementation of Kubernetes which gets tested, signed and supported by VMware. You can deploy Tanzu Kubernetes Grid across your vSphere clusters and also across Amazon EC2 instances. We are working to extend TKG support to multiple public cloud providers besides AWS. We are also planning to support multiple Kubernetes flavors in the future. The Tanzu Kubernetes Grid has native awareness of multi-cluster paradigms, and this allows you to manage any number of Kubernetes clusters from a centralized location, which has many administration advantages.

Managing TKG Clusters From the vSphere 7 UI

Here’s an illustration of how IT operations teams can manage their Tanzu Kubernetes clusters from the vSphere user interface. On the left panel you can see the hierarchical organization of the data center. Following a top-down order, we find the physical hosts grouped by vSphere clusters. Inside these, we see a new grouping component called namespaces. You can think about a namespace as a pool of resources dedicated to one or multiple Tanzu Kubernetes clusters. The right panel shows details about the status and number of Kubernetes clusters running on the cloud infrastructure. The panel also shows the capacity allocated for the namespace and how much of the allocated resources are being used.

TKG on VMware Cloud Foundation 4 Intel

To simplify the deployment and operations of vSphere 7 and Tanzu Kubernetes clusters, VMware bundles all the infrastructure pieces together within the hybrid cloud platform called VMware Cloud Foundation.

Here is a bird’s-eye view of the physical architecture of the platform. You can deploy Cloud Foundation on a wide range of supported hardware vendors. In the past two years, we have worked with Intel to develop a hybrid cloud data analytics solution that leverages different acceleration technologies for machine learning and big data. Spark may greatly benefit from these hardware components to see incremental performance gains. Cloud Foundation integrates the computing, networking and storage layers of the hybrid cloud infrastructure following a standardized, validated architecture. This architecture gets automatically deployed and lifecycle-managed using the series of management components included with the solution. On the left side of the picture, we see the operations module of Cloud Foundation, called the Management Domain. From that point, IT operations gets all the tools needed to operate a hybrid cloud environment, including the Tanzu Kubernetes clusters. As shown on the right side of the picture, development teams such as data engineering and data science can take control of the Kubernetes resources using standard APIs. Here we see a typical view of an end-to-end analytics pipeline with Apache Spark at the core.

With Kubernetes clusters available for developers, it is possible to deploy many open-source applications using the Bitnami Helm charts. If you are not familiar with Helm, you can think of it as an open-source package management solution for Kubernetes. Helm charts allow you to deploy and remove software using very simple command-line instructions, as sketched below. As a complement, Bitnami continuously monitors and updates a catalog of more than 100 open-source applications to ensure development stacks are always up-to-date and secure. Here we show part of the catalog of open-source technologies offered by Bitnami. As you can see on the right, Apache Spark is also part of it.
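For instance, installing and removing an application with Helm is roughly a one-liner each (a minimal sketch in Helm 3 syntax; the release and chart names here are illustrative, not from the recording):

    # Browse the Bitnami catalog from the command line:
    helm search repo bitnami

    # Install an application as a named release, then remove it again:
    helm install my-app bitnami/nginx
    helm uninstall my-app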

Now let’s see how all this works together in a brief demo.

Here is an overview of the demo plan. First we will explore the new Kubernetes view in the vSphere interface designed to manage Kubernetes resources. Then we will deploy a new Kubernetes cluster using the command-line interface and we will verify the status of this newly created cluster. Then we will explore the Bitnami Helm charts catalog, which includes a chart for Apache Spark. Next we’ll deploy the Apache Spark cluster using the Helm chart. Finally we’ll verify the functionality of the newly created Spark cluster. Okay. Let’s explore the new Kubernetes capability incorporated in the vSphere 7 management interface. Here is a view of the cloud infrastructure components. At the top we have data center objects and the typical resources they manage. Within the data center object, we see a new element called namespaces, which integrates the Kubernetes clusters. From this view, you can monitor the status of the Kubernetes components, the number of cluster deployments and the resource capacity that the Kubernetes clusters are consuming. Now let’s deploy a Kubernetes cluster named “k8-for-spark” using just one TKG command.

Here we see the TKG create cluster command. In this case, it is running in dry-run mode so we can verify the cluster specification before it gets built. The specification defines things like the Kubernetes version to be used, the configuration of the network and storage services, and the number of control plane and worker nodes that will support the cluster operation. Now let’s run the TKG command for real to spin up the new cluster. Notice that a manual deployment of Kubernetes can take a good bunch of commands, and here we only need to run just one command to do the job. Wait for a moment and the new Kubernetes cluster for Spark gets created. Now we can use the TKG get command to check the status of the new cluster. We do this several times until we see that the control plane and the two worker nodes are reported as running. Now let’s verify the Kubernetes cluster operation by deploying a couple of nginx pods on it.
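As a rough sketch, that sequence looks something like the following (exact flag names vary by TKG version, so treat these as assumptions rather than the verbatim demo commands):

    # Preview the generated cluster specification without creating anything:
    tkg create cluster k8-for-spark --plan dev --worker-machine-count 2 --dry-run

    # Create the cluster for real with one command:
    tkg create cluster k8-for-spark --plan dev --worker-machine-count 2

    # Poll the status until the control plane and workers report as running:
    tkg get clusters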

It is time to use the kubectl command to deploy nginx from our YAML file. Once kubectl apply gets executed, we get confirmation that the nginx pods got deployed. Then we use kubectl a couple of times to get the nginx pods’ status until they get reported as running.
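In shell terms, this step is essentially the following (the manifest file name is a placeholder, not the one from the recording):

    # Create the nginx deployment from a manifest file:
    kubectl apply -f nginx-deployment.yaml

    # Repeat until the pods show STATUS=Running:
    kubectl get pods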

Now let’s look at the Bitnami catalog of Helm charts, which includes charts for Apache Spark.

Bitnami provides a catalog of curated containers and Helm charts for thousands of open-source applications, with Apache Spark included. Here we see the options available to deploy Spark either on Docker or on Kubernetes. If we click on the tile, it takes us to the GitHub repository for the Spark Helm chart. Here we can see an example of the two Helm commands required to deploy Spark on Kubernetes. We can also see that the deployment can be customized by modifying the Spark chart’s configuration parameters. The list of parameters includes things like the image registry, the network service port numbers, CPU and memory allocations for the master and workers, and the number of worker replicas. There is a total of 97 parameters available to tailor the deployment to your needs.

Now let’s deploy Apache Spark on the Kubernetes cluster previously created for this purpose. We will install Spark using only two Helm commands. We start by adding the Bitnami charts repository to the local Helm records. Next we proceed to run the helm install command to make a new deployment called spark-k8. After several seconds we get confirmation that Spark got deployed.
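Those two commands look roughly like this (the release name follows the demo; the --set override is just an example of the chart parameters mentioned earlier, not part of the recorded commands):

    # 1) Register the Bitnami charts repository with the local Helm installation:
    helm repo add bitnami https://charts.bitnami.com/bitnami

    # 2) Deploy Spark as a release called "spark-k8", optionally tuning parameters:
    helm install spark-k8 bitnami/spark --set worker.replicaCount=2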

We are given some references about how to launch the web UI and also how to submit jobs. Next we use kubectl to verify that the Spark pods are working. We keep doing this until we see that the master and the workers are all up and running. Then we switch to the web UI to verify Spark’s state from this interface. We see that no applications are running or completed, because the cluster is new, and we can confirm that the cluster status is alive.
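A minimal sketch of that verification, assuming the release name above (the exact service name and ports come from the chart’s install notes, so these are assumptions):

    # Watch the master and worker pods come up:
    kubectl get pods

    # Forward the master web UI to localhost, then open http://localhost:8080:
    kubectl port-forward svc/spark-k8-master-svc 8080:80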

Finally let’s verify that the Spark cluster deployed in Kubernetes is operational by executing a job. Here we use the kubectl exec command to submit a Pi number estimation job, available from the examples jar file that comes with Spark. The estimation task gets launched for a total of 100 iterations. When the iterations complete, the result is printed on the screen, as you can see. Then we switch back to the Spark web UI and we verify the status of the last application. We click on the app ID, verifying that the job state is finished, which indicates that the job concluded in a normal way. – So now that we’ve seen how to deploy Spark on Kubernetes, let’s take the testing up a little bit into heavier workloads. We did that in our performance engineering lab, and I’m going to describe that now. So this is testing Spark on Kubernetes for performance.
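For reference, submitting that SparkPi job via kubectl exec looks something like this (the pod name, jar path and Spark version are assumptions based on the Bitnami image layout, not captured verbatim from the demo):

    # Run spark-submit inside the master pod against the in-cluster master service:
    kubectl exec -ti spark-k8-master-0 -- spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master spark://spark-k8-master-svc:7077 \
      /opt/bitnami/spark/examples/jars/spark-examples_2.12-3.0.0.jar 100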

Testing Spark on Kubernetes vs Spark Standalone

So we wanted to test Spark on Kubernetes versus Spark standalone, that is, Spark running outside of Hadoop, outside of YARN, just using the Spark cluster manager to manage it. So we had the same setup of hardware for both of these, same virtual machines, same hardware, same conditions, same test suite, but in one test we were running Spark standalone and in the subsequent test we were running Spark on Kubernetes, and we were trying to find out whether there would be any impact on performance, and also trying to see what benefits we get from Spark running on Kubernetes. And as I mentioned before, Kubernetes is a resource manager, so it’s largely taking the place of legacy big data systems here. So here’s the architecture of Spark on Kubernetes, as you see here. In this case we were pointing the spark-submit not at the Spark master but at the API server in Kubernetes, which is now acting as the resource manager, and we ran the Spark driver a little bit differently from this diagram. We ran the Spark driver on the same virtual machine as the Kubernetes master, but the executors were being spun up on the fly by the spark-submit command. So you’ll see this a bit more in the next slide.

So this is just the same picture blown up. So you can choose whether your Spark driver runs in a pod in your Kubernetes cluster, or you can run your driver on the client side; that’s called client mode, and we actually used client mode here, but the functionality was the same. Client mode allows you to execute remotely from your Kubernetes cluster, and cluster mode would allow you to run the driver within your cluster and have everything together. So the communication that’s going on here to, say, schedule a pod et cetera, that’s all being done within the same virtual machine in our Kubernetes case here, but the executors are running in pods and they’re being fired up on the fly here. So next slide.

Spark on Kubernetes Performance Tests (1/2): A 4-host Cluster that Supports Spark Standalone as well as Spark on Kubernetes

So this is the architecture at the hardware level and at the software level, all in one. And the four rows here, host one to host four, represent four second-generation Intel Xeon “Cascade Lake” servers, quite powerful servers with two sockets in each one, Intel Platinum 8260 at 2.4 gigahertz with hyper-threading on, which gives you 96 logical cores or hyper-threads, and 768 gigabytes of memory. So decent-sized machines here but not the biggest machines in the world by any means. On each of those, we ran four Spark worker virtual machines, and on the first host we ran the Spark master and Spark driver together. As I mentioned, the Spark driver is now outside the cluster to some extent. For the Spark master VM we had eight virtual CPUs and 64 gigs of memory, quite a small virtual machine actually, and for the Spark workers we gave them a little more power: they had 16 virtual CPUs, or vCPUs, and 120 gigs of memory each. So in total on the first host, we had four times 120, that’s 480 gigs for the workers, and another 64 for the Spark master, making 544 gigs allocated on that first host. Now we’re going to fill those empty slots on hosts two, three and four in when we deploy Kubernetes onto this, and that’s going to be the next picture that you’ll see. Remember, the same hosts, the same virtual machines in all cases; it’s just that now, instead of being just Spark workers, the four look-alike VMs on each host are now Kubernetes workers. So same hardware, but this time we have three Kubernetes masters. This is to simulate a highly available system, and we have an HAProxy running on host four there in the first VM. So we have three extra virtual machines in this case, in the first slot on each host, and these Kubernetes workers are the same sized VMs; the masters had eight virtual CPUs, the workers had 16. And notice in red on the bottom left-hand side: one virtual machine represents one Kubernetes worker, and we assigned one Spark executor pod to each worker node, and one or more Spark executors, of course, can run inside an executor pod. So a very simple design here, a very simple approach to doing this for uniformity across the two environments. So that’s how we set this up. Now a few notes on the next one.

Spark

The spark-submit command, which we typically point at the Spark master, can call a Kubernetes master instead of a Spark master by putting k8s:// as the prefix to the URL or URI you’re given. What we did in preparation for that was create a private namespace, just as you do in regular Kubernetes, which we called spark, and then in that namespace we created a service account, also called spark, and we created a cluster role binding to allow that service account to actually edit, and therefore create, pods in the cluster within that namespace. So these are standard procedures that you would apply if you’re setting up RBAC for your Kubernetes cluster. Nothing unusual here.
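Put together, the preparation plus the submit look roughly like this (the API server address, image name and jar path are placeholders; the namespace, service account and role follow the description above):

    # One-time RBAC setup: namespace, service account, and an edit role binding
    # so the driver can create executor pods in that namespace:
    kubectl create namespace spark
    kubectl create serviceaccount spark -n spark
    kubectl create clusterrolebinding spark-role --clusterrole=edit \
      --serviceaccount=spark:spark

    # Point spark-submit at the Kubernetes API server via the k8s:// prefix:
    spark-submit \
      --master k8s://https://<api-server>:6443 \
      --deploy-mode cluster \
      --conf spark.kubernetes.namespace=spark \
      --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
      --conf spark.kubernetes.container.image=<spark-image> \
      --class org.apache.spark.examples.SparkPi \
      local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar 100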

So we used cluster mode here, which is where the Spark driver runs in the cluster. We also used client mode in another experiment. So both worked fine on vSphere. Next slide please. So these are the results of the tests, and this was ResNet-50, an image classification test running on top of Spark with Intel BigDL libraries and a program written using Intel BigDL as the driver. Enrique mentioned some Intel software at the start; we work closely with Intel on increasing performance. Both were running on the same machines with a varying number of virtual machines; higher is better on these charts, and the blue represents Spark standalone, the orange represents Spark on Kubernetes. As you can see, they’re within 1% of each other. Now the number of images per second here is very low because this is not GPU-enhanced deep learning, this is regular CPU-based deep learning, and it’s an experiment to drive a lot of traffic through this rather than a test of deep learning. It’s trying to saturate the system as much as we could, and you will see that when we go to the next one, but my main point in this section is that performance is roughly the same whether you’re on Spark standalone, just running in virtual machines, or Spark running in Kubernetes and virtual machines. Okay. So having done that, we wanted to look at some other things, and here’s the Kubernetes console. And the purpose of showing you this is really to show you, under the CPU requests there, that these CPUs are working very hard, they’re at 95% and above, and also that you can use a standard Kubernetes dashboard to look at your virtualized Kubernetes just as you would if it was running elsewhere. We also have a console of our own called Tanzu Mission Control, and the Tanzu brand that Enrique mentioned at the beginning is a whole family of products, including Tanzu Mission Control, that can look at your Kubernetes clusters whether they’re running on VMware vSphere, or running in the cloud on AWS, or running on VMware Cloud on AWS. Any of those can be controlled by Tanzu Mission Control. Okay. Let’s go to the next one.

From Training Data to Prediction Using Spark

So having done that performance test, now we wanted to go back into training and ask: could we use Spark for training on VMware? And we took an example of a tool here which does training, and took the output from that tool, which is a Java object, and you see this setup here. Actually this is on VMware Cloud on AWS and its user interface, although I am using the bright background rather than the dark background that Enrique was using. You can tell this is VMware Cloud on AWS because right in the center of the screen it shows you the domain in which we’re operating, which is US West, and then on the top left-hand side the address mentioned is vmwarevmc.com, which means this is VMware running on the public cloud on AWS hardware, and those six machines on the top left-hand side of the navigation with their IP addresses, 10-dot et cetera, those are physical machines in an AWS data center running VMware vSphere.

H2O Driverless AI Running in a VM on VMware vSphere

But the reason that I highlighted this in red here is that this is the virtual machine running the machine learning training tool that I’m going to show you in a second. It’s not an unusual virtual machine, it’s just got four virtual CPUs in it and 50 gigs of memory. So it’s not an atypical virtual machine, it’s quite a normal one, and we brought this across from on premises without changing the virtual machine. We ran it on premises and then ran it on VMware Cloud on AWS as well. So here’s the user interface from that tool. It’s a very nice user interface; I’m not going to go through it in detail. This is H2O.ai’s Driverless AI tool, which does training based principally on tabular data, and we wanted to show you two forms of data being processed here. Tabular data is very common in business; image data is what a lot of deep learning is about, but a lot of business runs on tables, and this is tabular data for credit cards, and we’re trying to predict whether somebody would default on their next payment. That’s the left-hand column, but I’m not going to go through the details of the training here; instead we’re going to hit the deploy button in the middle of the top there and generate a Java object from this training session and deploy it into Spark. So when we hit deploy, we get a Java object which, in H2O’s terminology, is called a model-optimized Java object, or MOJO.

After Model Training, Six Steps to Kubernetes Deployment

Having got that pipeline, that MOJO, you see it on the third line of the Dockerfile on your right-hand side there. We’re going to copy that pipeline, the model-optimized Java object, the MOJO, into our container, and then we’re going to run a REST server in which that pipeline is going to execute, just for testing purposes, just to simulate the life of a data scientist here. So we created our Docker image, we tagged it, we pushed it to a repository. By the way, there’s a repository inside VMware’s Kubernetes as well, called Harbor, part of the Tanzu family. And then we tested that Docker container on its own by simply doing a docker run. But more interesting than that was deploying that same thing, that same container image, into Kubernetes, and you can see a kubectl apply there on the second-from-last line, and a kubectl apply of a NodePort on the last one: the first one deploys the scorer that we’ve just made into a Docker image, and the second one surrounds it with a NodePort service so we can get at it from outside the Kubernetes cluster. So this is simulating what a data scientist might do just to bring up a test in Kubernetes of their future Spark object or future Spark container. So now let’s move on to a more serious deployment of that. And this, by the way, is the REST server running, and the lines at the bottom indicate that the prediction, the scorer, is running. So this is Spring Boot executing a REST server within the container and being executed against the VM here.
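In command form, that test loop is roughly the following (the image, registry and manifest names are illustrative, not the ones from the slide):

    # Build, tag and push the scorer image to a registry (Harbor in this setup):
    docker build -t scorer:v1 .
    docker tag scorer:v1 harbor.example.com/ml/scorer:v1
    docker push harbor.example.com/ml/scorer:v1

    # Smoke-test the container on its own first:
    docker run -p 8080:8080 harbor.example.com/ml/scorer:v1

    # Then deploy the same image into Kubernetes and expose it via a NodePort:
    kubectl apply -f scorer-deployment.yaml
    kubectl apply -f scorer-nodeport-service.yaml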

Scorer Deployed in a REST Server using Spring Boot


Now let’s go back to Spark. H2O happens to have a flavor of their technology that works with Spark; it’s called Sparkling Water, sparkling H2O, and you see it here. We deployed that same pipeline, that same model-optimized Java object, into Sparkling Water and into standalone Spark, both running in virtual machines with Kubernetes. So this proved to us that the end-to-end, from training right through to model deployment, could be done on Spark, on VMware. Spark is typically used in training, but we used it here for inference as well as training, just to show that it could be done. So that’s Sparkling Water and standalone Spark.

Scorer runs in Spark Shell: Predictions Output

And finally here, what came out of that predictor, or that scorer, was the set of rows that you see in the middle of the screen and the set of rows you see at the bottom of the screen. They both have “default payment next” .0 and .1 as their titles. .0 means no default for that particular customer in the next month; .1 means there is a potential for default for that customer in the next month. This is just to show you the scorer actually working, based on the training that we did in H2O’s tool, Driverless AI, at the beginning. And that, by the way, is among a set of tools for automated ML that we encourage our partners, or third-party companies, to work with us on.

All right. So now to conclude: what you saw from the very beginning, from Enrique’s section, was a unified hybrid cloud platform; we call that VMware Cloud Foundation or VCF. It runs both on premises and in VMware Cloud on AWS and other clouds. It gives you the agility of Kubernetes with the enterprise capability of vSphere. Many, many thousands of companies run VMware vSphere to support all their applications today. Now they can run Kubernetes on there in an integrated way, and run Spark on top of Kubernetes. That gives a pretty compelling development and deployment system. Kubernetes definitely simplifies our methods of deploying Spark. The Spark workers came up with Spark automatically in the Kubernetes case; they had to be installed with Spark itself in the standalone case. You can easily get started with Spark using the Bitnami Helm charts; Enrique showed you that in his demo. And then we went on to testing the performance of Spark on VMs with and without Kubernetes, and they’re about equal, within 1% of each other. Kubernetes, definitely from our perspective, is becoming the method of choice for deploying both the training and inference parts of machine learning, and both machine learning and deep learning applications. We’ve also tested deep learning applications on Kubernetes; they deploy very well onto vSphere, onto Kubernetes with vSphere. And just a reminder that the world is not all about deep learning; there’s a lot of tabular, structured data in the world that also should be an important part of your machine learning deployments, and we showed that in our demo here with the H2O tool. Okay. So all of what I described in the performance part is given in this first URL here. We’ll come back to this URL so you can take a picture of it. There’s a general blog site at VMware called blogs.vmware.com/apps/ml for machine learning. You can find tons and tons of information there about how to use GPUs with VMware, how to do Spark on VMware, and we’ve also done a lot of testing of Hadoop and Spark together on VMware, as well as the standalone Spark that you saw earlier, and we’ve got many papers written about big data and vSphere in general that you can see here in the last three references. Okay. So please give us your feedback. We welcome your feedback and questions, and we look forward to your questions after we’re done here. Please rate the session and review it for us so we can improve this for the next time. We really appreciate your attention to us here. And so, on behalf of my colleague Enrique Corro Fuentes from VMware’s Office of the CTO, and Justin Murray here, thank you very much for your time and we’ll get your questions coming up.


 
About Enrique Corro

VMware

Enrique Corro has worked for VMware since 2006. Currently, he is a Staff Engineer focused on data science at VMware's Office of the CTO. Enrique is part of the team that drives new types of integrations between VMware and other IT industry-leading companies to facilitate the adoption of machine learning and artificial intelligence by companies of any size and industry. Enrique is currently pursuing a Master's degree in Data Science at the University of Illinois.

About Justin Murray

VMware

Justin Murray works as a Technical Marketing Manager at VMware. Justin creates technical material and gives guidance to customers and the VMware field organization to promote the virtualization of big data workloads on VMware's vSphere platform. Justin has worked closely with VMware's partner ISVs (independent software vendors) to ensure their products work well on vSphere, and continues to bring best practices to the field as the customer base for big data expands.