
Azure Databricks Cloud Architecture and System Integration Fundamentals
Summary
This course introduces students to how Azure Databricks fits into the Azure ecosystem and highlights integrations with several first-party Azure services.
Description
While the Databricks Unified Analytics Platform serves a broad range of needs across data teams, most cloud-native applications deliver the results customers expect only through integrations with other services. This course helps students understand which portions of cloud workloads are appropriate for Azure Databricks and highlights integrations with first-party Azure services for building scalable, secure applications.
Learning objectives
- Describe use-cases for Azure Databricks in an enterprise cloud architecture.
- Configure secure connections to data in an Azure storage account (see the sketch after this list).
- Configure connections from Azure Databricks to first-party services, including Azure Synapse Analytics, Azure Key Vault, Azure Event Hubs, and Azure Cosmos DB.
- Configure Azure Data Factory to trigger production jobs on Databricks.
- Trigger CI/CD workflows against Databricks assets using Azure DevOps.
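As a preview of the storage objective above, the sketch below shows one common pattern: reading data in an ADLS Gen2 storage account from a Databricks notebook, with the account key retrieved from an Azure Key Vault-backed secret scope. This is a minimal illustration, not course material; the secret scope, secret key, storage account, container, and path names are hypothetical placeholders.

```python
# Minimal sketch of the secure-storage pattern named in the objectives, written for a
# Databricks notebook where `spark`, `dbutils`, and `display` are predefined globals.
# All names below (secret scope, secret key, storage account, container, path) are
# hypothetical placeholders.

storage_account = "examplestorageacct"   # hypothetical ADLS Gen2 storage account
container = "raw"                        # hypothetical container

# Pull the storage account key from a secret scope assumed to be backed by Azure Key Vault.
account_key = dbutils.secrets.get(scope="example-kv-scope", key="storage-account-key")

# Configure the Spark session to authenticate to the storage account with that key.
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)

# Read a Parquet dataset over the abfss:// (ADLS Gen2) protocol and preview it.
df = spark.read.parquet(
    f"abfss://{container}@{storage_account}.dfs.core.windows.net/events/"
)
display(df.limit(10))
```

Keeping the key in a Key Vault-backed secret scope avoids embedding credentials in notebook source.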
Prerequisites
- Beginner-level knowledge of Spark programming (reading/writing data, batch and streaming jobs, transformations and actions).
- Beginner-level experience using Python or Scala for basic control flow operations.
- Familiarity with navigation and resource configuration in the Azure Portal.
Learning path
- This course is part of the data engineer learning path.
Proof of completion
- Upon completing 80% of this course, you will receive proof of completion.