The Bosch Center for Artificial Intelligence provides AI services to Bosch’s business units and plants. AI technologies are shaping Bosch’s products and services across many domains: manufacturing, predictive maintenance, vehicle diagnostics, supply chain, large-scale simulations, and more. This is a technical position for someone skilled at bringing together disparate technologies to solve business problems. Our team is responsible for streaming Bosch data to centralized analytics platforms and building data-based services for a wide variety of Bosch engineering and research teams.
We are looking for a talented engineer who is passionate about building fault-tolerant data services and analytics tools. Your work will be used by hundreds of Bosch engineers and have global impact by improving the quality and value of Bosch products.
• Design and implement fault-tolerant data pipelines to integrate large amounts of data from many diverse storage systems.
• Promote a culture of self-serve data analytics by minimizing technical barriers to data access and understanding.
• Execute complex data engineering projects that have a significant impact on Bosch’s global business.
• Share knowledge by clearly articulating results and ideas to customers, managers, and key decision makers.
• Stay current with the latest research and technology, and communicate your knowledge throughout the enterprise.
• Take responsibility for preparing data for analysis and provide critical feedback on issues of data integrity.
• Up to 10% travel may be required.
• MS in Computer Science
• 2+ years of in-depth, hands-on experience with distributed systems
• 2+ years of hands-on programming experience in Scala or Java
• Strong understanding of tuning and performance optimization of Apache Spark jobs
• Experience with integration of data from multiple data sources
• Experience with various messaging systems, such as Kafka or RabbitMQ
• Experience managing a Spark/Hadoop cluster and troubleshooting ongoing issues