Ding Ding - Databricks

Ding Ding

Software Engineer, Intel

Ding Ding is a software engineer on Intel’s big data technology team, where she works on developing and optimizing distributed machine learning and deep learning algorithms on Apache Spark, with a particular focus on large-scale analytical applications and infrastructure.


Accelerating Deep Learning Training with BigDL and Drizzle on Apache Spark
Summit 2018

The BigDL framework scales deep learning to large data sets using Apache Spark. However, running BigDL at large scale incurs significant scheduling overhead from Spark. In this talk we propose a new parameter manager implementation that, along with coarse-grained scheduling, can provide significant speedups for deep learning models such as Inception and VGG.

Aggregation functions like reduce or treeReduce, which are used for parameter aggregation in Apache Spark (and in the original MapReduce), are slow because centralized scheduling and the driver's network bandwidth become a bottleneck, especially in large clusters. To reduce the overhead of parameter aggregation and allow for near-linear scaling, we introduce a new AllReduce operation, part of the parameter manager in BigDL, which is built directly on top of the BlockManager in Apache Spark. AllReduce in BigDL uses a peer-to-peer mechanism to synchronize and aggregate parameters. During parameter synchronization and aggregation, all nodes in the cluster play the same role; the driver's overhead is eliminated, enabling near-linear scaling.

To address the scheduling overhead we use Drizzle, a recently proposed scheduling framework for Apache Spark. Currently, Spark uses a BSP computation model and notifies the scheduler at the end of each task. Invoking the scheduler at the end of each task adds overhead and results in decreased throughput and increased latency. Drizzle introduces group scheduling, where multiple iterations (a group) are scheduled at once. This decouples the granularity of task execution from scheduling and amortizes the costs of task serialization and launch.

Finally, we will present results from using the new AllReduce operation and Drizzle on a number of common deep learning models, including VGG and Inception. Our benchmarks, run on Amazon EC2 and Google DataProc, will show the speedups and scalability of our implementation.

Session hashtag: #DLSAIS13
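The peer-to-peer aggregation idea can be sketched in a few lines of plain Python. This is an illustrative simulation, not BigDL's actual implementation: the parameter vector is split into one slice per worker, each worker sums the contributions of every peer for only its own slice, and the aggregated slices together form the full result. No single driver ever aggregates the whole vector.

```python
# Framework-free sketch of peer-to-peer AllReduce (illustrative only,
# not BigDL's code): each worker owns one contiguous slice of the
# parameter vector and aggregates that slice from all peers.

def all_reduce(worker_params):
    """worker_params: list of equal-length parameter lists, one per worker.
    Returns the element-wise sum, computed slice-by-slice as if each
    worker aggregated only the slice it owns."""
    n_workers = len(worker_params)
    n_params = len(worker_params[0])
    # Partition parameter indices into one contiguous slice per worker.
    bounds = [i * n_params // n_workers for i in range(n_workers + 1)]
    aggregated = [0] * n_params
    for owner in range(n_workers):  # runs in parallel across nodes in reality
        for i in range(bounds[owner], bounds[owner + 1]):
            # The owner pulls parameter i from every peer and sums.
            aggregated[i] = sum(p[i] for p in worker_params)
    return aggregated
```

With two workers holding [1, 2, 3, 4] and [10, 20, 30, 40], each worker aggregates half of the indices, and the combined result is [11, 22, 33, 44]. Because every node does an equal share of the aggregation, total work per node stays roughly constant as the cluster grows, which is the source of the near-linear scaling described above.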

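The benefit of group scheduling can be shown with a back-of-the-envelope model (the numbers below are illustrative assumptions, not Drizzle measurements): if every scheduler invocation pays a fixed overhead, scheduling a group of iterations at once pays that overhead once per group instead of once per iteration.

```python
# Toy cost model for group scheduling (illustrative assumption, not a
# Drizzle benchmark): fixed overhead is paid per scheduler invocation.

def total_overhead(iterations, overhead_per_launch, group_size=1):
    """Total scheduling overhead for `iterations` iterations when the
    scheduler is invoked once per group of `group_size` iterations."""
    launches = -(-iterations // group_size)  # ceiling division
    return launches * overhead_per_launch
```

Under this toy model, 1000 iterations at 5 ms of overhead each cost 5000 ms of pure scheduling when scheduled one at a time, but only 50 ms with groups of 100, which is the amortization the abstract describes.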

Embrace Sparsity At Web Scale: Apache Spark MLlib Algorithms Optimization For Sparse Data
Summit 2016

From purchase history to movie ratings, sparsity has always been one of the primary characteristics of big data. Powerful as Apache Spark is at parallel processing of partitioned data, many of the algorithms in MLlib are implemented under the assumption of a certain degree of data density, such as the gradients of logistic regression or the cluster centers of KMeans. Yet in collaborations with internet companies, we often find feature counts in the millions or even billions, which far exceed the capacity of some important algorithms in MLlib, or make them impractical due to enormous memory consumption even when the training data is highly sparse.

To fill the gap, we present a Spark package containing major improvements we have made to support sparse data at large scale. Through optimizations in data structures, network communication, and arithmetic operations, we can substantially compress memory consumption and reduce computation cost for sparse data, enabling these algorithms to run at much larger feature dimensions. Two examples are our implementation's successful support of logistic regression with 1 billion features and of KMeans with 10 million features and hundreds of clusters.

We'll also share some work we are contributing back to Spark, as well as best practices we have accumulated around sparse data support in Spark MLlib.
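The kind of sparsity-aware arithmetic the abstract alludes to can be sketched with a logistic-regression gradient (an illustrative example, not the MLlib or package code): only the nonzero features of an example contribute, so with a sparse representation the per-example cost is proportional to the number of nonzeros rather than the full dimension, even when that dimension is in the billions.

```python
import math

# Sparsity-aware logistic-regression gradient (illustrative sketch, not
# MLlib's implementation). Vectors are dicts mapping feature index to
# value; missing indices are implicitly 0, so work is O(nnz).

def sparse_lr_gradient(weights, example, label):
    """weights: dict {feature_index: weight}; example: sparse feature
    dict {feature_index: value}; label: 0 or 1.
    Returns the per-example gradient as a sparse dict."""
    # Dot product touches only the example's nonzero features.
    margin = sum(weights.get(i, 0.0) * v for i, v in example.items())
    pred = 1.0 / (1.0 + math.exp(-margin))
    # Gradient is (pred - label) * x, nonzero only where x is nonzero.
    return {i: (pred - label) * v for i, v in example.items()}
```

With an all-zero weight vector and a single active feature, the prediction is 0.5, so the gradient for a positive example is -0.5 on that one feature and implicitly zero everywhere else; a dense representation would instead allocate and traverse the full billion-dimensional vector.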