Skip to main content

Announcing the winners of the first Databricks Asia-Pacific LLM Cup!

Ed Lenta
June Tan
Jacintha Ng
Brian Law

in

Share this post

We're excited to announce the winners of Databricks' inaugural Asia-Pacific Large Language Model (LLM) Cup - in partnership with AWS, a first-of-its-kind competition in the region, which garnered participation from over 1,000 data and AI practitioners across more than 10 countries.

From October to December 2023, participants were invited to build LLM-powered applications using Databricks to solve real-world business problems. To help set up participants for success, prior to the hackathon, we provided self-paced learning workshops on building LLMs and dedicated training led by Databricks' solution architects.

Entries were evaluated based on creativity, business applicability, relevance, thoroughness and the scalability of the LLM architecture. While there were many of impactful projects, two teams stood out with their innovative ideas addressing critical organisational challenges in telecommunications and cybersecurity.

Congratulations to the winning teams whose innovations prove just how quickly teams can transform businesses with the right technology!

Optus unlocks value from call centre customer conversations with LLM, offering 360-degree view of its customers

Securing the grand prize, Contact Catalyst from Optus, an Australian telecommunications company, addressed a prevailing business challenge: understanding customers' needs during calls. Call centre agents typically take manual notes, leading to unreliable interpretations, often resulting in customer frustration and dissatisfaction.

Using synthetic data sets, the team analysed data in an automated and secure fashion, and navigated the complexities of managing vast volumes of text and voice data from customer interactions and gained deep, real-time insights into customers' needs, significantly enhancing the overall user experience.

Its LLM can efficiently classify the intent of the customer call, provide real-time detection of new topics, content summarisation sentiment analysis and detect aggression, alerting the business to address the customer's concerns quickly and politely.

Ensign InfoSecurity uncovers LLM vulnerabilities to fortify cybersecurity beyond conventional measures

The runner-up, Red Teaming Your LLM by Ensign InfoSecurity ("Ensign"), Asia's largest pure-play end-to-end cybersecurity service provider, demonstrated their solution to counter the cyber vulnerabilities with growing deployment of LLM-based applications.

With greater accessibility to LLM APIs, more companies are building their own LLM-based applications. Yet, the rapid deployment of these models has outpaced the development of comprehensive security protocols, leaving many applications vulnerable to high-risks cyberattacks. The Ensign team devised a "red teaming approach" to train LLMs with proprietary jailbreak prompts. Consequently, the LLMs proactively identified and addressed vulnerabilities in LLM defences, enhancing overall cybersecurity.

We are thrilled to see so much interest and innovation coming from the data and AI community across APJ. It's inspiring to see teams use Databricks to develop LLMs that tackle their unique challenges – really getting to the heart of what they need to better serve their customers and revolutionise their business operations.

Both teams presented their winning entries at Databricks' APJ Data + AI World Tour on Thursday, 18 January 2024 (recording available on-demand).

Visit the project gallery to learn more about their submissions: https://databricks-llm-cup-2023.devpost.com/project-gallery.

We'd also like to express our gratitude to AWS for their partnership in the LLM Cup - playing a crucial role in the event’s success.

Try Databricks for free

Related posts

Data Intelligence Platforms

The observation that " software is eating the world " has shaped the modern tech industry. Today, software is ubiquitous in our lives...

Big Book of MLOps Updated for Generative AI

Last year, we published the Big Book of MLOps, outlining guiding principles, design considerations, and reference architectures for Machine Learning Operations (MLOps). Since...

Build GenAI Apps Faster with New Foundation Model Capabilities

Following the announcements we made last week about Retrieval Augmented Generation (RAG) , we're excited to announce major updates to Model Serving. Databricks...
See all Company Blog posts