How to Efficiently Scale Your Data Analytics Team with Databricks
OVERVIEW
EXPERIENCE | In Person
---|---
TYPE | Breakout
TRACK | Data Strategy and Lakehouse Implementation
INDUSTRY | Enterprise Technology, Financial Services
TECHNOLOGIES | Delta Lake, Developer Experience, Governance
SKILL LEVEL | Intermediate
DURATION | 40 min
The right framework for scaling data teams is one focused on efficiency rather than pure headcount growth. In this session, we will present an organizational framework, influenced by Zhamak Dehghani’s Data Mesh philosophy, that helped us successfully scale the data analytics team at Moody’s KYC. We will dive into the framework’s components, such as functional role clarity, balancing centralization and decentralization, defining data producers and consumers, managing data as products, data governance and access controls, and contributing to business metrics. Lastly, we will touch on how Databricks, which spans both the data engineering and data science sides of our workloads, helped us achieve our strategic goal of unifying our analytical data plane.
SESSION SPEAKERS
Mike Xu
Director, KYC Data Strategy
Moody's Analytics