
Driving a Large Language Model Revolution in Customer Service and Support


Want to build your own LLM-enabled bot? Download our end-to-end solution accelerator here.


Business leaders are excited about the potential of large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard and now MosaicML’s MPT. Their ability to digest large volumes of text and generate new content from that information has the potential to transform a wide range of existing business processes, improving the speed, scale and consistency of many tasks that had previously been the exclusive domain of human specialists.

Nowhere is this excitement more acutely felt than in areas dependent on agent-led service and support. Prior to the emergence of these technologies, organizations relied on large numbers of individuals trained in the extensive bodies of documents that make up corporate policies and procedures. Agents not only had to respond in accordance with those rules; they occasionally had to interpret them to handle edge cases not explicitly addressed in the documentation.

Getting agents up to speed is often a time-consuming process. Combined with high turnover rates, staffing the call centers and support desks behind these processes has long been a recognized challenge. Attempts to offload requests to online knowledge bases, interactive voice response (IVR) systems and prior generations of chatbots often left the consumers of these services frustrated and underserved.

But a pre-trained LLM, combined with knowledge extracted from the same documents used to train human agents, can be brought up to speed instantly and deployed at a scale perfectly aligned with consumer demand. These models never tire, never have a bad day and never quit. And while in these early days of the technology we would not recommend turning service and support interactions over to an LLM-based agent outright, these capabilities are today perfectly suited to augmenting a human agent, providing guidance and support that improves the speed, efficiency, consistency and accuracy of their work while reducing ramp-up time. In short, LLMs are poised to revolutionize how businesses deliver support services.

Authoritative Responses Require Enterprise-Specific Content

While much of the conversation surrounding this topic centers on the large language models themselves, the reality is that the quality of the output they generate depends on the content they consume. Most models are initially fed large volumes of general information, which makes them very capable of delivering well-crafted, often strikingly human responses to user prompts and questions. But if an LLM is to generate a response tailored to the specific policies and procedures of a particular company, it must be presented with those details and tasked with responding within the context formed by those documents.

The strategy employed by most LLM-based agents (bots) designed for this work is to provide a general set of instructions that task the model with producing a friendly, helpful and professional response to a user-originated question, given the context provided by documents previously determined to be relevant. This three-part approach to response generation, combining a system instruction with a user question and relevant documentation, allows the bot to synthesize a response more consistent with the expectations of the organization.
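
To make this three-part structure concrete, the sketch below assembles such a prompt in Python. The instruction wording, the build_prompt helper and the example usage are illustrative assumptions, not the accelerator’s actual code.

```python
# A minimal sketch of the three-part prompt described above.
# The instruction text and helper names are illustrative assumptions.

SYSTEM_INSTRUCTION = (
    "You are a friendly, professional support assistant. "
    "Answer the user's question using only the context provided. "
    "If the context does not contain the answer, say you don't know."
)

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Combine the system instruction, retrieved documents and the
    user's question into a single prompt for the model."""
    context = "\n\n".join(context_docs)
    return (
        f"{SYSTEM_INSTRUCTION}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```

Chat-oriented APIs typically accept the instruction, context and question as separate messages rather than a single string, but the division of labor is the same.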

Content Management Is The Most Pressing Challenge

The challenge then becomes: how best to identify the documents relevant to a given question? While much of the technical conversation on this topic tends to veer toward strategies for converting documents into numerical vectors (embeddings) and performing high-speed similarity searches, the primary challenge is much more organizational in nature.
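
Before turning to that organizational challenge, here is a minimal sketch of the embedding-and-search step itself, assuming the open source sentence-transformers library; the model name and toy corpus are placeholders, and a production system would hold its vectors in a purpose-built index rather than an in-memory array.

```python
# Illustrative embedding-based retrieval; model choice and corpus
# are placeholder assumptions, not a recommended configuration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Refunds are processed within five business days of approval.",
    "Passwords can be reset from the account login page.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
# Normalized embeddings make the dot product a cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)

def top_k_docs(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(top_k_docs("How do I reset my password?"))
```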

For years, IT professionals have assembled large repositories of data to support a variety of analytic functions, but these efforts have largely focused on the highly structured data collected through operational systems. Champions of more expansive approaches to data management and analytics have rightly pointed out that 80 to 90% of an organization's information resides in messages, documents, audio and video files (what we frequently refer to as unstructured data). But because those champions could not articulate a compelling analytic vision for that data, unstructured data was largely excluded from any kind of centralized data management.

Today, the unstructured data assets needed to provide the context for authoritative response generation with LLMs are scattered across the enterprise. So while organizations discuss how best to leverage these generative technologies, they should aggressively begin defining strategies for centralizing the management of the relevant unstructured data assets, so that once a solution receives the green light, the organization is in a position to act.

Databricks Is the Ideal Solution for Both Structured and Unstructured Data

At Databricks, we’ve long advocated for a more expansive view of analytics and data management. Through our unified lakehouse platform, we focus on providing organizations consistent, scalable and cost-effective ways to leverage all of their information assets, whether structured or unstructured. This goes beyond data collection and processing to include rich capabilities for security, governance and collaboration. We believe Databricks is the ideal platform for building your foundation for LLM success.

And our capabilities go well beyond data management. Databricks has a long history of embracing machine learning and AI alongside more traditional business analytics. Just as we provide a unified platform for managing the full spectrum of your data, we provide business analysts, data scientists and application developers with a powerful platform for extracting the full potential of the data it houses.

The key to our ability to support this breadth of capabilities is our embrace of open source innovation.  Databricks is built from the ground up as an open platform that allows organizations to rapidly pivot their analytics work to take advantage of the latest and greatest capabilities emerging from the open source community while retaining a well-managed and well-governed data foundation.  And nowhere is this embrace of open source going to be more impactful than in the space occupied by LLMs.

Embracing Open Source Is Critical

While there is much excitement these days around proprietary LLM innovations, we and many others recognize the rapid ascension of the open source community in this space. In a recently leaked memo, a senior Google employee laments that “We have no moat, and neither does OpenAI.” While innovations from OpenAI, Google and others have absorbed much of the early spotlight, the open source community has already demonstrated its ability to quickly catch up and solve many of the nagging problems that have blocked mainstream businesses from adopting these technologies. So while we recognize the innovation delivered by closed source solutions today, we believe it is critical that organizations retain the flexibility to change course in the coming years by avoiding vendor lock-in.

Already, new standards for the development of LLM-based applications have emerged, and Databricks has integrated support for them within its platform. We will continue to deliver enhancements so that as the LLM community veers left and then right, enterprises can easily connect their information assets with these technologies.

This is driven not by passive observation of where the LLM community is headed but by active engagement in the dialog, including moments in which we directly challenge the thinking of the perceived leaders in this space. We are actively involved in the development of numerous LLM-based solutions, both internally and with our customers. And whenever possible, we will continue to release free, publicly available code demonstrating exactly how LLM-based solutions can be built.

Build Your First LLM-Based Chatbot Now

With that in mind, let us show you exactly how to build a context-enabled, LLM-based chatbot like the one outlined above. Using content taken from our own knowledge base (made publicly available so that users can recreate our work), we have built a solution capable of addressing customer support questions with LLM technologies. The step-by-step code behind this work, covering data preparation, agent development and deployment to a microservice that lets you integrate the agent into any number of applications (we integrated our internal build of this solution into Slack), is provided with sufficient comments and documentation to help your organization understand the solution and get started with your own.
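
At a high level, the pieces sketched earlier come together in a short loop: retrieve relevant documents, assemble the prompt and query the model. The sketch below assumes the build_prompt and top_k_docs helpers shown above and leaves the model call as a placeholder; the downloadable notebook contains the concrete implementation and deployment details.

```python
def answer(question: str, llm) -> str:
    """Retrieve context, build the prompt and query the model.

    `llm` stands in for whatever completion function you deploy
    (a hosted or open source model); see the notebook for a
    concrete implementation.
    """
    context_docs = top_k_docs(question)
    prompt = build_prompt(question, context_docs)
    return llm(prompt)
```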

We invite you to download these assets here and to reach out to your Databricks representative to discuss how LLMs can best be integrated into your business processes.
