Join financial services industry leader Junta Nakai and Databricks solution architects as they share how institutional investors can leverage the power of data and AI to mitigate risk and gain a competitive advantage. We will also be joined by Martin Williams, Head of Reference Data Business Development at ICE, to discuss how ESG data empowers investors to make more informed investment decisions.
Enterprises are increasingly turning to a Lakehouse architecture for data management, combining the flexibility and near-infinite scalability of the cloud with the governance and reliability of traditional data warehouses. The Databricks Lakehouse is built on Delta Lake, an open source storage layer that brings reliability to data lakes with ACID transactions, scalable metadata handling, and unified streaming and batch data processing. It enables companies to cost-effectively unlock all their SAP ERP data for analytical purposes and to combine it with live data sources such as IoT devices.
In this session, Databricks Global Manufacturing and Distribution Leader Rob Saker will lead an in-depth discussion on how manufacturers are transforming their business with data and AI, taking advantage of internal and external sources across a range of unstructured, semi-structured and structured data to deliver insights in near real-time.
Databricks Media and Entertainment Industry Leader Steve Sobel will be joined by SEGA Europe’s Data Services Manager, Felix Baker, and Chris Lang, Head of Game Data at Kolibri Games, for an in-depth conversation on the state of the sector, plus presentations on the latest innovations and use cases from each company. We’ll end the event with an open discussion and questions from the audience.
Do you want to take the next step in your career as a Data Engineer? Perhaps you're just getting started in the rapidly expanding field of Data Science and Analytics. Maybe you're already doing on-premises analytics and want to move to the cloud. Join us in this introductory session and learn how to create your first Azure Databricks cluster, use notebooks, and work with data within Databricks.
Please join Databricks at FCW’s 2021 Data & Analytics Summit to learn more about these essential efforts. Participants in this virtual event will come away with a better understanding of data governance and compliance, the importance of cloud-based data services, the next steps for the Federal Data Strategy, and the technology behind data mining and analysis.
In this live workshop, we’ll share how organizations in the public sector can modernize their analytics architecture with a lakehouse in the cloud on AWS. You’ll learn best practices for bringing reliability, performance and security to your data lake that create the perfect foundation for a cost-effective, highly scalable lakehouse architecture.
Join Databricks at State HITC Connect to learn best practices for using modern technologies to simplify and scale your data analytics and machine learning efforts. We’ll also share how to efficiently and affordably build a HIPAA-compliant, scalable cloud “lakehouse” for all of your data (structured, unstructured, streaming and more) and analytics use cases. Jack Fletcher, retired IT lead from CMS, will join us to discuss how he helped lead an effort to modernize the CMS data architecture and leverage population-scale insights to improve the delivery of health services.
Struggling to make your data lake analytics-ready? Join us at the upcoming AWS Summit Online ANZ and hear how Databricks Lakehouse is simplifying the common struggles of data teams by streamlining data processing workflows and making data ready to use for analytics, data science and machine learning.
Labeled data powers machine learning, but labels can be expensive to create. Active learning guides labeling intelligently, yielding lower cost and better models. This talk will show how to apply modAL, a Python library for active learning, to a dataset of customer complaints to make manual labeling less painful, and how to scale these techniques with Apache Spark and open source tools like Hyperopt and MLflow.
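To make the idea concrete, here is a minimal sketch of pool-based active learning with uncertainty sampling, using scikit-learn on synthetic data. The dataset, model choice, and loop structure are illustrative assumptions, not the talk's actual code; modAL's ActiveLearner wraps this same query-teach pattern behind its API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for a real corpus such as customer complaints
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Start with a handful of labeled points; treat the rest as an unlabeled pool
labeled = [int(i) for i in rng.choice(len(X), size=10, replace=False)]
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # each round: fit, query the most uncertain point, "label" it
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Uncertainty = 1 - max class probability; query the point we're least sure about
    query = pool[int(np.argmax(1 - probs.max(axis=1)))]
    labeled.append(query)  # in practice, a human annotator supplies this label
    pool.remove(query)

accuracy = model.score(X, y)
print(f"labeled {len(labeled)} of {len(X)} points, accuracy={accuracy:.2f}")
```

Instead of labeling all 500 points, the loop asks for only 30 labels, spending each one where the current model is least confident.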