Yonggang Hu is Distinguished Engineer, Chief Architect at Platform Computing, IBM. He has been working on distributed computing, grid, cloud, and big data for the past 20 years. Before joining Platform Computing, Yonggang was Vice President and Application Architect at JPMorgan Chase, focusing on computational analytics and application infrastructure. Yonggang holds an MS in Computer Science from Peking University and an MBA from Cornell University.
Deep neural network training is time-consuming, often taking days or weeks, and hard to master. Selecting the right hyper-parameters is difficult but important, since they directly affect the behavior of the training algorithm and have a significant impact on performance and accuracy. In this talk, we will discuss a novel approach that uses distributed Spark to explore the vast hyper-parameter search space and find a near-optimal configuration for a targeted quality of service (QoS). Several hyper-parameter and network architecture search approaches will be discussed and compared (e.g., random search, tree-structured Parzen estimators, Bayesian optimization, reinforcement learning). Furthermore, we will propose a framework and method to share information across different trials, making the search process highly efficient. We will also introduce a real-time monitoring, tuning, and optimization mechanism for model training that detects early-stop conditions and recommends better hyper-parameters. Finally, we will use real-world models and use cases to demonstrate how hyper-parameter selection and adaptive tuning accelerate model development and training when running Caffe and TensorFlow in our distributed Spark environment.
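As a rough illustration of the random-search baseline that such approaches are compared against, the sketch below samples hyper-parameter configurations and keeps the best one. The search space, the dummy objective, and all names here are hypothetical, not the speakers' actual framework; in the talk's setting each trial would run as a Spark task training a real Caffe or TensorFlow model.

```python
import random

# Hypothetical search space: each entry draws one value for that parameter.
SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),  # log-uniform
    "batch_size": lambda: random.choice([32, 64, 128, 256]),
    "momentum": lambda: random.uniform(0.8, 0.99),
}

def sample_config():
    """Draw one random hyper-parameter configuration."""
    return {name: draw() for name, draw in SEARCH_SPACE.items()}

def dummy_objective(cfg):
    """Stand-in for a real training run returning a validation loss."""
    # Pretend the optimum sits at learning_rate=0.01, momentum=0.9.
    return (cfg["learning_rate"] - 0.01) ** 2 + (cfg["momentum"] - 0.9) ** 2

def random_search(n_trials, seed=0):
    """Evaluate n_trials random configurations and return the best (loss, cfg)."""
    random.seed(seed)
    trials = [sample_config() for _ in range(n_trials)]
    # With Spark, this map is what gets distributed, e.g.
    # sc.parallelize(trials).map(dummy_objective); trials are independent.
    scored = [(dummy_objective(cfg), cfg) for cfg in trials]
    return min(scored, key=lambda t: t[0])

best_loss, best_cfg = random_search(50)
```

Smarter strategies (TPE, Bayesian optimization) replace the independent sampling above with sampling informed by earlier trials, which is why sharing information across trials matters for efficiency.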
GPUs are increasingly used across a broad range of applications, such as machine learning, image processing, and risk analytics, to achieve higher performance and lower cost (energy footprint). Meanwhile, Spark has become a very popular distributed framework for data processing and complex analytics. This talk will share real-world examples of GPU and Spark usage in cognitive computing and in risk analytics in financial services, and will explain the synergy between GPU and Spark. Finally, it will discuss the challenges of using GPUs in production Spark applications and the enhancements needed to manage and utilize both CPU and GPU resources. Specifically, it will address:

- Supporting Spark MLlib on GPU.
- A GPU-specific resource group to facilitate mixed GPU/CPU resource management, e.g., identifying GPU-sensitive stages and tasks in the Spark DAG scheduler.
- GPU Python and Scala APIs that let end users declare the computing logic they want to run on GPU devices, including shuffle configuration for transferring data between GPU and CPU memory.
- Mixed GPU and CPU resource scheduling in Spark for better resource utilization.
- Slot-based and multi-dimensional solutions for mixed scheduling to maximize resource usage.
- Failback capability: if the GPU fails to finish a job, the work seamlessly falls back to the CPU.
- Speed-up capability: trying GPU resources on long-tail tasks.
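The failback behavior in the list above can be sketched as follows. This is a minimal illustration, not the actual Spark scheduler API; `gpu_fn`, `cpu_fn`, and the task payload are hypothetical stand-ins for GPU- and CPU-backed implementations of the same computation.

```python
def run_with_failback(task, gpu_fn, cpu_fn):
    """Run the task on the GPU path first; fall back to the CPU path on failure.

    Returns (result, device) so the caller can see which path ran.
    """
    try:
        return gpu_fn(task), "gpu"
    except Exception:
        # GPU path failed (device error, out of memory, no device present, ...):
        # seamlessly rerun the same task on the CPU.
        return cpu_fn(task), "cpu"

# Example: a GPU path that always fails, forcing CPU failback.
def flaky_gpu_square(x):
    raise RuntimeError("no GPU device available")

def cpu_square(x):
    return x * x

result, device = run_with_failback(7, flaky_gpu_square, cpu_square)
# result == 49, device == "cpu"
```

The same wrapper shape also covers the speed-up case: for a long-tail straggler task, the scheduler can speculatively launch the GPU path while the CPU copy is still running and take whichever finishes first.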