
Every business wants to be a data and AI vanguard. But to make that happen, companies must commit to a GenAI vision and strategy and then break down that vision into manageable steps. In other words: companies need the right processes to unlock and unleash AI across their businesses. 

As we’ve covered in past blogs, how enterprises manage their people and their underlying IT foundation are both important steps in the Generative AI adoption journey. But setting the right processes is how companies connect their workforce and technology, and the only way to drive outcomes that push the business toward its goals. It’s one thing to adopt new systems. It’s another to actually get value from them. 

Businesses must ensure they’ve built environments where AI can thrive. They must eliminate technical hurdles to adoption. Employees must feel confident using these new systems and supported by management through the transition. Leaders must pick the right projects to showcase the power of AI – and then empower internal development teams to move quickly in executing on the vision. And ultimately, enterprises must learn how to evaluate their progress in meeting data and AI goals to secure continued investment. 

Here’s how companies can create repeatable, scalable workflows that enable users to quickly move bleeding-edge innovation from experimentation to reality.  

GenAI requires good data governance 

Data is the catalyst for AI, so it must be a foundational component of the process. Few aspects of data management are as critical as controlling how data is collected and who can access it, commonly known as data governance. 

In this new AI era, governing the data becomes as important as protecting it. A security barrier is important, but so is managing everything that actually happens inside those walls. Analytics, real-time applications or GenAI — it all comes back to data governance. 

As AI extends through the enterprise, companies must be able to track how data is moving through the organization, who is using it and for what purpose. Governments are already imposing new requirements around AI transparency and explainability – and more are expected to take action in the coming years. Transparency and explainability are particularly important in finance and healthcare, where AI powers services like determining loan eligibility or diagnosing patients.

That's why companies should establish processes to track data movement and protect their sensitive assets, without hindering innovation. This is why both broad and fine-grained access controls are important. 
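To make the distinction concrete, here is a minimal, hypothetical sketch of how broad and fine-grained controls can work together: a broad check decides whether a role can read a dataset at all, while a fine-grained rule limits which columns that role can see. The `AccessPolicy` class, roles and column names are all illustrative assumptions, not any specific product's API.

```python
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    # Broad control: which roles may read the dataset at all.
    allowed_roles: set
    # Fine-grained control: the columns each role may see.
    # Roles absent from this dict see every column.
    column_grants: dict = field(default_factory=dict)

    def visible_columns(self, role: str, all_columns: list) -> list:
        if role not in self.allowed_roles:
            return []  # broad check fails: no access at all
        granted = self.column_grants.get(role)
        if granted is None:
            return list(all_columns)  # no column restriction for this role
        return [c for c in all_columns if c in granted]

# Hypothetical policy: analysts see aggregates only; ML engineers see everything.
policy = AccessPolicy(
    allowed_roles={"analyst", "ml_engineer"},
    column_grants={"analyst": {"region", "sales_total"}},
)

columns = ["customer_ssn", "region", "sales_total"]
print(policy.visible_columns("analyst", columns))      # masked view
print(policy.visible_columns("ml_engineer", columns))  # full view
print(policy.visible_columns("intern", columns))       # denied
```

The point of the sketch is that sensitive fields stay protected without blocking the roles that legitimately need the rest of the data.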

As these systems proliferate, LLMs must be continually monitored to verify their answers are accurate. Companies need to know that the right datasets are matched to the right end systems and that the information is timely and of high quality. It’s under-appreciated just how difficult this is: some source systems may date back to the 1980s, and businesses run immense numbers of software systems that keep data siloed.

Building the internal capability to track information from source to end use case is not a trivial undertaking. But companies that do it can truly democratize data and AI, and unlock powerful new use cases across the business. 
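At its simplest, source-to-use-case tracking is a lineage log: every time a dataset moves from a source to a consumer, record who touched it and why, so governance questions ("who used this data, and for what purpose?") have an answer. The sketch below is a hypothetical illustration; the dataset and system names are invented, and real lineage tooling is far more involved.

```python
from datetime import datetime, timezone

lineage_log = []

def record_lineage(dataset: str, source: str, consumer: str, purpose: str) -> None:
    # Append one lineage event: dataset X flowed from source to consumer.
    lineage_log.append({
        "dataset": dataset,
        "source": source,
        "consumer": consumer,
        "purpose": purpose,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })

def trace(dataset: str) -> list:
    # Answer the governance question: who consumed this dataset, and why?
    return [e for e in lineage_log if e["dataset"] == dataset]

# Hypothetical events: one dataset feeding two downstream use cases.
record_lineage("contracts_v2", "legacy_erp", "summarizer_llm", "contract summarization")
record_lineage("contracts_v2", "legacy_erp", "bi_dashboard", "quarterly reporting")

print([e["consumer"] for e in trace("contracts_v2")])
```

Even this toy version shows the payoff: when a regulator or an internal auditor asks how a model got its data, the answer is a query, not an archaeology project.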

Picking the first GenAI use cases  

First impressions matter. In many cases, the board of directors and CEO are pressuring the executive team to get started with AI. But while there’s enthusiasm to get going, business leaders also want to ensure they’re not throwing precious company resources at bad AI. 

Success in the initial pilot cases helps secure continued investment. It’s why companies must take the time to really think through what they hope to achieve with AI. A goal to drive higher sales growth might require a totally different set of technologies than one to increase margins. 

We refer to this as setting the data and AI “North Star.” And just as travelers have long relied on Polaris for guidance in the dark of night, a company’s own “North Star” will prove vital in keeping its data and AI efforts on track. 

But when settling on the first few projects, companies should also be realistic. At any given moment, there are likely hundreds of potential ways AI can drive value that all vary in terms of importance and feasibility. The early use cases might not be the biggest value-generators for the business. That’s not the point.

Instead, companies should use these nascent projects to identify pain points and start developing a consistent approach to identifying, evaluating, prioritizing and implementing future use cases.

Often, the so-called “low-hanging fruit” is the best way for internal advocates to quickly prove the capabilities of AI to skeptics. These are usually smaller undertakings intended to help assist employees with time-consuming, but monotonous tasks, like quickly summarizing research information across contracts, legal documents, market research and other sources. 

Once the company proves adept at these easier projects, it can more confidently pursue advanced use cases, like building or fine-tuning its own model. The more initiatives a business undertakes, the more efficient the process becomes for vetting new investments. 

Building, buying or customizing GenAI

Often, teams want to build their own customized tools instead of buying one off-the-shelf. While this gives IT departments immense control over their technology environments, it can also eat up valuable development time and require a larger financial investment. 

There’s one question that businesses need to ask themselves when faced with the “build vs. buy” dilemma: Will it drive a competitive differentiator? 

As businesses do their research, they often find that many other companies share the same problem or are working toward a similar outcome, and that well-established software applications already exist to help.

For example, with the growing power of large foundation models, few organizations are seeking to build their own general-purpose AI systems. Instead, they’re much more interested in using their own data to create bespoke solutions that actually understand the business and can produce hyper-relevant results (you can even read an example of how Databricks did this).  

The ability to augment commercial models is how companies can combine their desire for customized software with the ease of buying an off-the-shelf tool. This is similar to how businesses use open source today to help accelerate application development. On platforms like the DI Platform, companies can easily use proprietary data to make open source foundation LLMs more performant for their specific needs. 

Tracking GenAI in the real world

Most importantly, as companies begin to let AI systems loose in the real world, they need a way to monitor how the models are performing. This is vital for ensuring that GenAI applications are always producing accurate and timely outputs. 

As more models go into production, it will be important for enterprises to be able to detect drift through one interface. But it’s also critical for organizations to track performance to guarantee the systems are creating the intended value for the business. 
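A minimal version of drift detection is a statistical comparison: hold a reference window of a model's quality scores and flag when the recent window's mean shifts by more than a few reference standard deviations. The sketch below is an illustrative assumption, not a production monitoring system; real deployments track many signals (input drift, output drift, latency, cost) through dedicated tooling.

```python
from statistics import mean, stdev

def drifted(reference: list, recent: list, threshold: float = 2.0) -> bool:
    # Flag drift when the recent mean moves more than `threshold`
    # reference standard deviations away from the reference mean.
    ref_mean, ref_std = mean(reference), stdev(reference)
    if ref_std == 0:
        return mean(recent) != ref_mean
    return abs(mean(recent) - ref_mean) / ref_std > threshold

# Hypothetical evaluation scores (e.g., answer-accuracy ratings per batch).
reference_scores = [0.91, 0.89, 0.93, 0.90, 0.92]
stable_scores = [0.90, 0.92, 0.91, 0.89, 0.93]
shifted_scores = [0.70, 0.68, 0.72, 0.69, 0.71]

print(drifted(reference_scores, stable_scores))   # False: within normal variation
print(drifted(reference_scores, shifted_scores))  # True: quality has dropped
```

Running a check like this on a schedule, across every production model, is what makes the "one interface" view of drift possible.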

For example, many businesses want to become data-driven but struggle to track progress toward that goal. By monitoring GenAI systems, enterprises can track metrics like the number of data sources contributing to outcomes or the overall volume of data the models are analyzing. 

And beyond the technology itself, businesses should be actively monitoring the impact to the workforce. Employee surveys can indicate whether workers are spending less time on things like manual data entry. 

And aligning usage to KPIs can encourage broader usage of the tools. This could be something as basic as requiring employees to run a certain number of queries every week. Then, managers can work with low adopters to figure out potential roadblocks.  
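Flagging low adopters can be as simple as counting each employee's weekly queries against the agreed target. The sketch below is hypothetical: the five-query target and the usernames are invented for illustration, and a real rollout would pull counts from the platform's usage logs.

```python
from collections import Counter

WEEKLY_QUERY_TARGET = 5  # assumed KPI, not a real benchmark

def low_adopters(query_log: list, target: int) -> list:
    # Count queries per user and return everyone below the weekly target.
    counts = Counter(query_log)
    return sorted(user for user, n in counts.items() if n < target)

# Hypothetical log: one entry per query, tagged with the user who ran it.
query_log = ["ana", "ana", "ana", "ana", "ana", "ben", "ben", "cara"]

print(low_adopters(query_log, WEEKLY_QUERY_TARGET))  # ['ben', 'cara']
```

The output is a follow-up list for managers, not a scoreboard; the point is to surface roadblocks, not to penalize.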

Building the GenAI strategy 

Ultimately, a company's success in becoming a data and AI leader will come down to how it manages its people, processes and technology. Fail in one area and the rest will crumble. But with the right strategy and partners, businesses can fortify all three pillars simultaneously, allowing them to move with the speed every business wants. 


To learn more about the formula for solving the challenges of overhauling your processes, people and technology, check out our recent eBook, “Accelerate Your Data and AI Transformation.” 
