Ted is working on the Battle.net team at Blizzard, helping support great titles like World of Warcraft, Overwatch, Hearthstone, and many more. Previously, he was a Principal Solutions Architect at Cloudera, helping clients succeed with Hadoop and the Hadoop ecosystem, and before that a Lead Architect at the Financial Industry Regulatory Authority (FINRA). He has contributed code to Apache Flume, Apache Avro, Apache YARN, Apache HDFS, Apache Spark, Apache Sqoop, and many other projects. Ted is also a co-author of the O’Reilly book “Hadoop Application Architectures,” a frequent speaker at many conferences, and a frequent blogger on data architectures.
In the world of distributed computing, Spark has simplified development and opened the doors for many to start writing distributed programs. Folks with little to no distributed coding experience can now write just a couple of lines of code that immediately put hundreds or thousands of machines to work creating business value. However, even though Spark code is easy to write and read, that doesn't mean users never run into long-running, slow-performing jobs or out-of-memory errors. Thankfully, most of these issues have less to do with Spark itself than with the approach we take when using it. This session will go over the top five things we've seen in the field that prevent people from getting the most out of their Spark clusters. When some of these issues are addressed, it is not uncommon to see the same job run 10x or even 100x faster on the same cluster, with the same data, just a different approach.
Traveling to different companies and building out a number of Spark solutions, I have found that there is a lack of knowledge around how to unit test Spark applications. In this talk we will address that by walking through examples of unit testing Spark Core, Spark MLlib, Spark GraphX, Spark SQL, and Spark Streaming. We will build and run the unit tests in real time, and additionally show how to debug Spark as easily as any other Java process. The end goal is to encourage more developers to build unit tests alongside their Spark applications, to increase development velocity, stability, and production quality.
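One pattern the talk's goal suggests can be sketched even without a cluster: factor the transformation logic out of the Spark job into plain functions so they can be unit tested on ordinary collections. The sketch below is plain Python, not the Spark API, and the names (`parse_event`, `count_actions`) are hypothetical.

```python
# Hypothetical transformation logic, factored out of a Spark job so it
# can be unit tested without a SparkContext. In a real job these
# functions would be passed to rdd.map / reduceByKey or used in a UDF.

def parse_event(line):
    """Parse a 'user,action' log line into a (user, action) tuple."""
    user, action = line.split(",", 1)
    return user.strip(), action.strip()

def count_actions(events):
    """Count events per user over any iterable of (user, action) pairs."""
    counts = {}
    for user, _action in events:
        counts[user] = counts.get(user, 0) + 1
    return counts

# A unit test that runs in milliseconds, with no cluster involved:
events = [parse_event(l) for l in ["a,click", "a,view", "b,click"]]
assert count_actions(events) == {"a": 2, "b": 1}
```

Because the logic is ordinary code, the same assertions run in any test framework, and the Spark job itself shrinks to wiring.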
So you know you want to write a streaming app, but any non-trivial streaming app developer has to think about these questions: How do I manage offsets? How do I manage state? How do I make my Spark Streaming job resilient to failures? Can I avoid some failures? How do I gracefully shut down my streaming job? How do I monitor and manage my streaming job (e.g. retry logic)? How can I better manage the DAG in my streaming job? When should I use checkpointing, and for what? When should I not use checkpointing? Do I need a WAL when using a streaming data source? Why? When don't I need one? In this talk, we'll share practices that no one tells you about when you start writing your streaming app, but that you'll inevitably need to learn along the way.
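The offset-management question above can be illustrated with a minimal sketch. This is plain Python with a hypothetical in-memory `OffsetStore`, not Spark or Kafka APIs: the point is simply that committing the offset only after processing yields at-least-once semantics, so a crash replays a batch rather than dropping it.

```python
# Illustrative sketch of manual offset management (all names are
# hypothetical; a real store would be Kafka, ZooKeeper, a database, etc.).

class OffsetStore:
    """Durable record of the next offset to process (in-memory here)."""
    def __init__(self):
        self._committed = 0

    def read(self):
        return self._committed

    def commit(self, offset):
        self._committed = offset

def process_stream(messages, store, handle):
    """Resume from the last committed offset and process forward."""
    start = store.read()
    for i, msg in enumerate(messages[start:], start):
        handle(msg)          # process the record first...
        store.commit(i + 1)  # ...then commit: at-least-once on crash/replay

store = OffsetStore()
seen = []
process_stream(["m0", "m1"], store, seen.append)
# A later run resumes where the last one committed:
process_stream(["m0", "m1", "m2"], store, seen.append)
assert seen == ["m0", "m1", "m2"]
```

Committing before processing instead would flip the guarantee to at-most-once, which is the kind of trade-off the session digs into.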
If you have a parent-child relationship or a many-to-many relationship in your data model, you will want to learn about the nested dataset functionality in Spark. Ted Malaska (co-author of Hadoop Application Architectures) will walk through why nested types may change your life in solving common problems like large joins and even cartesian joins. This talk will include a full code example of creating nested tables with Spark SQL, populating those tables, and finally accessing them in a number of ways.
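The core idea behind nesting can be sketched in plain Python (not Spark SQL; the `orders`/`items` example is hypothetical): instead of keeping parent and child rows in separate flat tables that must be joined at read time, the child rows are folded into their parent once, so each parent record carries its children.

```python
# Flat parent/child tables: reassembling them requires a join per query.
orders = [
    {"order_id": 1, "customer": "ann"},
    {"order_id": 2, "customer": "bob"},
]
items = [
    {"order_id": 1, "sku": "x"},
    {"order_id": 1, "sku": "y"},
    {"order_id": 2, "sku": "z"},
]

def nest(orders, items):
    """Fold child rows into their parents in one pass over each table."""
    by_id = {o["order_id"]: dict(o, items=[]) for o in orders}
    for it in items:
        by_id[it["order_id"]]["items"].append(it["sku"])
    return list(by_id.values())

nested = nest(orders, items)
assert nested[0] == {"order_id": 1, "customer": "ann", "items": ["x", "y"]}
```

In Spark the same shape is expressed with array-of-struct columns, and paying the join cost once at write time is what makes repeated large joins (or accidental cartesian joins) avoidable later.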
It is one thing to write an Apache Spark application that gets you to an answer. It's another to know you used every trick in the book to make it run as fast as possible. This session will focus on those tricks. Discover patterns and approaches that may not be apparent at first glance, but that can be game-changing when applied to your use cases. You'll learn about nested types, multithreading, skew, reducing, cartesian joins, and fun stuff like that. Session hashtag: #SFdev13
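One of the skew tricks mentioned above, key salting, can be sketched outside Spark. This plain-Python sketch (the data and bucket count are assumptions for illustration) spreads a hot key across N salted keys so no single reducer receives all of it, then merges the partial counts in a second pass.

```python
# Sketch of key salting for skewed aggregation (plain Python, not Spark).
N = 4  # number of salt buckets; an assumption for this illustration

records = [("hot", 1)] * 10 + [("cold", 1)] * 2

# Pass 1: salt each key, so the "hot" key fans out across N sub-keys.
# In Spark this is the shuffle stage where skew would otherwise pile
# every "hot" record onto one partition.
partial = {}
for i, (key, value) in enumerate(records):
    salted = (key, i % N)  # deterministic salt keeps the sketch testable
    partial[salted] = partial.get(salted, 0) + value

# Pass 2: strip the salt and merge the partial aggregates.
totals = {}
for (key, _salt), value in partial.items():
    totals[key] = totals.get(key, 0) + value

assert totals == {"hot": 10, "cold": 2}
# No salted bucket of the hot key got more than ceil(10 / N) records:
assert max(v for (k, _s), v in partial.items() if k == "hot") <= 3
```

The price is a second aggregation step; the payoff is that the slowest task no longer carries the entire hot key.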
So you know you want to write a streaming app, but any non-trivial streaming app developer would have to think about these questions: - How do I manage offsets? - How do I manage state? - How do I make my Spark Streaming job resilient to failures? Can I avoid some failures? - How do I gracefully shut down my streaming job? - How do I monitor and manage my streaming job (e.g. retry logic)? - How can I better manage the DAG in my streaming job? - When do I use checkpointing, and for what? When should I not use checkpointing? - Do I need a WAL when using a streaming data source? Why? When don't I need one? This session will share practices that no one talks about when you start writing your streaming app, but you'll inevitably need to learn along the way. Session hashtag: #SFdev5