Join our team and experience Workday! It’s fun to work in a company where people truly believe in what they’re doing. At Workday, we’re committed to bringing passion and customer focus to the business of enterprise applications. We work hard, and we’re serious about what we do. But we like to have a good time, too.
In fact, we run our company with that principle in mind every day: One of our core values is fun.
The Role

As part of Workday’s Analytics team, you will be responsible for building, enhancing, and extending our Spark-based, large-scale distributed data processing engine in the cloud. You will work alongside a top-notch team to architect and build features in Spark representing our:
High-velocity hybrid transactional/analytical engine
Low-latency interactive engine
Large-scale cube builder engine
Your primary responsibilities will include developing data processing algorithms and techniques that work on large datasets, as well as algorithms that facilitate an interactive querying experience over large volumes of data.
You are an engineer who is passionate about data management and distributed data processing frameworks and algorithms, with hands-on experience in Apache Spark internals. Performance, scalability, and reliability are not an afterthought for you. You take pride in building algorithms that are not only scalable and efficient but also require no tuning. You think and code in terms of well-defined abstractions, relational algebra, and operators. You take pride in building high-performance infrastructure and platforms that can process large volumes of data while remaining simple. You enjoy coming up with novel solutions and can clearly articulate their value to stakeholders. You also excel at juggling priorities and making the right trade-offs between feature content and timely delivery, all while ensuring customer success.

Key qualifications:
Excellent coding skills in Java and Scala, plus Linux expertise
Knowledge of Apache Spark and Spark SQL internals
Expertise in distributed systems and distributed data processing engines
Background in database internals and query processing, and a strong understanding of SQL
Expertise in one or more of: Hadoop YARN, Kubernetes, or Mesos
A BS in Computer Science with 5 years of experience, an MS in Computer Science with 2 years of experience, or a PhD in a relevant area
Open source contributions, preferably as a contributor to Apache Spark
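To give a flavor of the operator-centric query processing mentioned above, here is a toy, self-contained sketch of the classic iterator (Volcano) model in Java, where each relational operator pulls rows from its child via next(). All class and method names here are illustrative assumptions, not Workday or Spark code; real engines like Spark SQL generate code over columnar batches rather than pulling one row at a time.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// Iterator-model operator interface: a "row" is just an int array in this toy.
interface Operator {
    boolean hasNext();
    int[] next();
}

// Leaf operator: scans an in-memory table.
class Scan implements Operator {
    private final Iterator<int[]> rows;
    Scan(List<int[]> table) { this.rows = table.iterator(); }
    public boolean hasNext() { return rows.hasNext(); }
    public int[] next() { return rows.next(); }
}

// Relational selection: passes through only rows matching a predicate.
class Filter implements Operator {
    private final Operator child;
    private final Predicate<int[]> pred;
    private int[] buffered; // next matching row, if already found

    Filter(Operator child, Predicate<int[]> pred) {
        this.child = child;
        this.pred = pred;
    }
    public boolean hasNext() {
        // Pull from the child until a matching row is found or input is exhausted.
        while (buffered == null && child.hasNext()) {
            int[] row = child.next();
            if (pred.test(row)) buffered = row;
        }
        return buffered != null;
    }
    public int[] next() {
        if (!hasNext()) throw new NoSuchElementException();
        int[] row = buffered;
        buffered = null;
        return row;
    }
}

public class Demo {
    public static void main(String[] args) {
        // Toy table of (id, value) rows; query plan: Filter(value > 20) over Scan.
        List<int[]> table = Arrays.asList(
            new int[]{1, 10}, new int[]{2, 25}, new int[]{3, 40});
        Operator plan = new Filter(new Scan(table), row -> row[1] > 20);
        while (plan.hasNext()) {
            System.out.println(Arrays.toString(plan.next()));
        }
    }
}
```

Composing operators this way (Scan, Filter, and in a real engine Project, Join, and Aggregate) is what the posting means by thinking in relational algebra: each operator has one narrow contract, and plans are built by plugging operators together.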