Sr. Data Mining Engineer, Bosch
Prasanth Lade is a Sr. Data Mining Engineer at the Bosch Center for Artificial Intelligence in Sunnyvale, California. He holds a PhD in Computer Science, specializing in machine learning, from Arizona State University, where his research focused on applying machine learning to technologies for individuals with visual impairment. At BCAI, Prasanth works on leveraging Big Data from manufacturing plants and applying AI at scale across Bosch's products and services. His areas of interest include manufacturing analytics, natural language processing, and time series analysis.
The Bosch Center for Artificial Intelligence provides AI services to Bosch's business units and manufacturing plants. We strive to generate value for our customers by deploying machine learning in their products, services, and processes across domains such as Manufacturing, Engineering, Supply Chain Management, and Intelligent Services. In this presentation we will discuss how we use Spark:
- to prepare and analyze manufacturing data from hundreds of assembly lines
- to forecast thousands of monthly financial time series

We will also share some of the lessons we have learned as we have scaled up our use cases, and explain the important role Spark plays in our internal operations and in our data collection, storage, and processing strategy.
Extended description of the use cases to be presented:
- Manufacturing Analytics: Bosch's Manufacturing Analytics Solution (MAS) provides self-serve access to manufacturing data from across Bosch's production plants. Through MAS, engineers, analysts, and data scientists can monitor key performance indicators, perform independent analyses, and troubleshoot complex production issues. We also offer AI services, built on top of this data, to monitor and improve product quality, prevent failures, and make processes more efficient. We make extensive use of both the Scala and Python APIs and will share details about how we handle feature engineering for data from complex assembly lines.
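The actual MAS feature pipeline is Bosch-internal, but the per-station aggregation pattern it describes can be sketched in plain Python. The record schema (`station`, `cycle_time`, `passed`) and the features (mean/std cycle time, pass rate) are illustrative assumptions, not the real MAS data model; in Spark the same shape would be expressed as a `groupBy("station").agg(...)` over the full production history.

```python
from statistics import mean, pstdev

# Hypothetical assembly-line measurement records; this schema is an
# illustrative assumption, not the actual MAS data model.
records = [
    {"station": "S1", "cycle_time": 4.2, "passed": True},
    {"station": "S1", "cycle_time": 4.8, "passed": True},
    {"station": "S1", "cycle_time": 6.1, "passed": False},
    {"station": "S2", "cycle_time": 2.9, "passed": True},
    {"station": "S2", "cycle_time": 3.1, "passed": True},
]

def station_features(rows):
    """Aggregate one station's measurements into model features."""
    times = [r["cycle_time"] for r in rows]
    return {
        "mean_cycle_time": mean(times),
        "std_cycle_time": pstdev(times),
        "pass_rate": sum(r["passed"] for r in rows) / len(rows),
    }

# Group by station, then aggregate -- the same shape Spark computes
# in a distributed fashion with groupBy and agg.
by_station = {}
for r in records:
    by_station.setdefault(r["station"], []).append(r)

features = {s: station_features(rows) for s, rows in by_station.items()}
```

The same grouped-aggregation structure carries over directly once the data lives in a Spark DataFrame; only the execution moves from a local loop to the cluster.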
- Finance & Controlling: We use Spark to distribute the machine learning workload of our financial forecasting use case, which requires delivering forecasts for thousands of monthly time series. Distributing these model runs with Spark lets us meet the business's turnaround requirements. We will present details about the infrastructure we use, the pipelines we have built, and how the models are distributed across the cluster.
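The talk covers the real models and infrastructure; the core idea, though, is that each series is fitted independently, so the work fans out as an embarrassingly parallel map. A minimal sketch of that pattern, with a thread pool standing in for Spark executors and a naive drift model standing in for the actual ML algorithms (series keys and values below are made up):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical monthly series keyed by an id; values are illustrative.
series = {
    "cost_center_001": [100, 110, 120, 130, 140, 150],
    "cost_center_002": [80, 82, 79, 85, 88, 90],
}

def forecast_one(item):
    """Fit and forecast a single series -- a naive drift model stands
    in here for whatever algorithm the real pipeline selects."""
    key, values = item
    drift = (values[-1] - values[0]) / (len(values) - 1)
    return key, values[-1] + drift  # one-step-ahead forecast

# Each series is independent, so the map distributes cleanly -- in
# Spark the same shape can be a grouped pandas UDF or an RDD of keys
# mapped over the cluster instead of a local thread pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    forecasts = dict(pool.map(forecast_one, series.items()))
```

Swapping the thread pool for Spark changes where `forecast_one` runs, not its shape: the per-series function stays a pure map over independent keys.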