
The Executive’s Guide to Data, Analytics and AI Transformation, Part 6: Allocate, monitor and optimize costs

Chris D’Agostino
Mimi Park
Usman Zubair

This is part six of a multi-part series to share key insights and tactics with Senior Executives leading data and AI transformation initiatives. You can read part five of the series here.

Beginning in 1987, Southwest Airlines famously standardized on flying a single airplane type: the Boeing 737 family. This decision allowed the airline to save on both operations and maintenance, requiring only one type of simulator to train pilots, streamlining its spare parts supply chain and keeping its parts inventory manageable. Pilots and maintenance crews were effectively interchangeable whenever someone called in sick or missed a connection. The key takeaway is that, in order to reduce costs and increase efficiency, Southwest created its own version of a unified platform, getting everyone involved in flight operations to collaborate and work from the same point of view. Lessons learned on the platform could easily be shared and reused across the team, and the more the team used the unified platform, the more they collaborated and the deeper their expertise grew.

Reduce complexity and cost

Enterprise data warehouse (EDW) and data lake architectures have proven to be limited and complex, increasing both time-to-market and cost. This is mainly because data must go through ETL before it can be explored in the EDW, and because data must be split across multiple pipelines to feed the data lake. The data lakehouse architecture simplifies cost allocation because all processing, serving and analytics run in a single compute layer.

Organizations can right-size their data environments and control costs through policies. A centralized, consistent approach to security, auditing and monitoring makes it easier to spot inefficiencies and bottlenecks in the data ecosystem, and performance improvements come quickly as the workforce builds platform expertise.

The Databricks Lakehouse Platform optimizes cost for your data and AI workloads by intelligently provisioning infrastructure only as you need it. Customers can establish policies that govern cluster size for DEV, TEST and PROD environments or for anticipated workloads.
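For example, a cluster policy for a TEST environment might cap cluster size, restrict node types and enforce auto-termination so idle compute does not accumulate cost. The sketch below shows one way this could look using the Databricks SDK for Python; the policy values, node types and the cost_center tag name are illustrative assumptions rather than recommendations.

```python
import json

from databricks.sdk import WorkspaceClient

# Hypothetical policy for TEST clusters: small, auto-terminating and tagged
# for cost tracking. All values below are illustrative only.
test_policy = {
    "autotermination_minutes": {"type": "fixed", "value": 30, "hidden": True},
    "num_workers": {"type": "range", "minValue": 1, "maxValue": 4},
    "node_type_id": {"type": "allowlist", "values": ["i3.xlarge", "i3.2xlarge"]},
    "custom_tags.cost_center": {"type": "fixed", "value": "analytics-test"},
}

# Register the policy, assuming workspace credentials are already configured
# for the SDK (e.g., via environment variables or a configuration profile).
w = WorkspaceClient()
w.cluster_policies.create(
    name="test-env-small-clusters",
    definition=json.dumps(test_policy),
)
```

Teams spinning up TEST clusters are then constrained to this footprint, while a separate, more generous policy can govern PROD workloads.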

Centralized funding model

As previously mentioned, data transformation initiatives require substantial funding. Centralizing the budget under the CDO provides consistency and visibility into how funds are allocated and spent, increasing the likelihood of a positive ROI. Funding at the beginning of the initiative will be significantly higher than funding in the out-years; it's not uncommon to see 3- to 5-year project plans at larger organizations. Funding levels from years 1 and 2 are typically reduced in years 3 and 4 and reduced further in year 5, until the initiative reaches a more sustainable steady state.

The budget should account for the data engineering function, commercial software licenses and the build-out of a center of excellence to accelerate the organization's data science capabilities. Again, the CDO must partner closely with the CIO and the enterprise architect to make sure that resources stay focused on the overall implementation plan and to make sound build vs. buy decisions.

It's common to see the full budget controlled by the CDO, with a significant portion allocated to resources in the CIO's organization to perform the data engineering tasks. The data science community reports to the CDO and is matrixed into the lines of business in order to better understand the business drivers and the data sets. Finally, investment in data governance cannot wait until the company has suffered a major regulatory challenge, a data breach or some other serious defense-related problem. CDOs should spend the necessary time to educate leaders throughout the organization on the value of data governance.

Chargeback models

To establish the centralized budget that funds the data transformation initiative, some organizations impose a "tax" on each part of the organization, based on its size and its profit and loss. This base-level funding should be used to build the data engineering and data science teams needed to deploy the building blocks of the new data ecosystem. However, as different teams, departments and business units begin using the new data ecosystem, infrastructure costs for both compute and storage will begin to grow. These costs will not be evenly distributed, because different parts of the organization use the platform at different levels, and the groups with the heaviest usage should cover their pro rata share. This requires the ability to monitor and track usage, not only in terms of compute but also the amount of data generated and consumed. This so-called chargeback model is an effective and fair way to cover the cost deltas over and above the base-level funding.
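As a simple illustration of the mechanics, the sketch below allocates the monthly cost delta above the base-level funding pro rata by compute consumption. The team names, usage figures and dollar amounts are hypothetical, and a real implementation would also weight in data volumes generated and consumed.

```python
# Hypothetical monthly usage records; in practice these would come from the
# platform's usage and billing logs.
usage = [
    {"team": "marketing", "compute_units": 1200},
    {"team": "risk", "compute_units": 4500},
    {"team": "supply_chain", "compute_units": 800},
]

base_funding = 50_000.0  # covered by the centralized "tax"
total_bill = 85_000.0    # actual infrastructure cost for the month

# Only the delta above the base-level funding is charged back, split
# pro rata by compute consumption.
delta = max(total_bill - base_funding, 0.0)
total_units = sum(r["compute_units"] for r in usage)

for r in usage:
    share = delta * r["compute_units"] / total_units
    print(f"{r['team']}: ${share:,.2f}")
```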

Plus, not all departments or lines of business require the same level of compute power or fault tolerance. The architecture should support separating out the runtime portions of the data ecosystem and isolating workloads based on the specific SLAs of the use cases in each environment. Some workloads cannot fail, and their SLAs will require full redundancy, increasing the number of nodes in the cluster or even requiring multiple clusters operating in different cloud regions. In contrast, less critical workloads that can fail and be restarted can run on less costly infrastructure. Avoiding a one-size-fits-all approach makes the ecosystem easier to manage and allocates cost to where the performance is needed most.

The modern data architecture of the Databricks Lakehouse makes it easy to monitor and record usage, allowing organizations to track costs on a per-workload basis for data and AI. This provides the ability to implement an enterprise-wide chargeback model and put appropriate spending limits in place.
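As a minimal sketch of what that monitoring could look like, the PySpark query below aggregates billable usage by a cost_center tag (as in the earlier policy example). The billing_usage table name and its usage_date, usage_quantity and custom_tags columns are assumptions standing in for wherever your usage records actually land.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Roll up billable usage per month and per cost center so the chargeback
# deltas can be split according to actual consumption.
monthly_usage_by_team = (
    spark.table("billing_usage")  # assumed table of exported usage records
    .withColumn("cost_center", F.col("custom_tags")["cost_center"])
    .withColumn("month", F.date_trunc("month", F.col("usage_date")))
    .groupBy("month", "cost_center")
    .agg(F.sum("usage_quantity").alias("total_usage"))
    .orderBy("month", "cost_center")
)

monthly_usage_by_team.show()
```

A scheduled job over a view like this, combined with alerts on per-team thresholds, is one straightforward way to enforce spending limits.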

To learn how you can establish a centralized and cohesive data management, data science and data governance platform for your enterprise, please contact us today.

This blog post, part of a multi-part series for senior executives, has been adapted from the Databricks eBook Transform and Scale Your Organization With Data and AI. Access the full content here.

Implementing a successful data strategy requires a thoughtful approach to people and processes. Join us at the Data & AI Summit from June 26-29 to find out how to align goals, identify the right use cases, organize and enable teams, mitigate risk and operate at scale so you can be even more successful with data, analytics and AI.
