Kelvin is a founding member of the Hadoop team at Uber, where he builds tools and services on top of Spark to support multi-tenancy and large-scale, computation-intensive applications. He is the creator and lead engineer of the Spark Uber Development Kit, Paricon, and SparkPlug services, the main initiatives of Spark Compute at Uber. At Ooyala, he was co-creator of Spark Job Server, an open-source RESTful server for submitting, running, and managing Spark jobs, jars, and contexts, and on top of it implemented real-time video analytics engines using datacube materializations via RDDs.
We would like to share the innovative ways we use Spark at Ooyala, together with Apache Cassandra, to tackle interactive analytics and OLAP applications. In particular, we are turning Spark into a service with our Spark Job Server. The job server has been a big help to our development efforts, providing a single REST API for submitting, running, and managing Spark jobs, jars, and contexts.
As your enterprise deploys more and more Spark jobs, you will run into many common issues: deploying job jars; managing job lifecycles and progress; serializing and processing job results; keeping track of failures, job statuses, and jars; and managing Spark contexts for fast interactive jobs. Moreover, every job is an application with its own interface and parameters, so submitting and running jobs Hadoop-style just doesn't work. Our open-source Spark Job Server offers a RESTful API for managing Spark jobs, jars, and contexts, turning Spark into an easy-to-use service with a uniform API for all jobs. We will discuss the job server, its APIs, and its current and upcoming features in much greater detail. Learn how the Spark Job Server can turn Spark into an easy-to-use service for your organization, and, as a developer, how it lets you focus on the job algorithm instead of nitty-gritty infrastructure details.