
Onboarding your new AI/BI Genie

August 19, 2024 by Chao Cai and Richard Tomlinson
At Databricks, we want to make data and AI accessible to everyone on the planet. This is why we're building solutions like AI/BI...

Announcing the PyCharm Integration with Databricks

We are excited to announce the latest addition to the Databricks developer experience: the PyCharm Professional Integration with Databricks! This new plugin...

Beyond the Leaderboard: Unpacking Function Calling Evaluation

1. Introduction The research and engineering community at large has been continuously iterating upon Large Language Models (LLMs) in order to make them...

Building a robust data stewardship tool in life sciences

This blog was written in collaboration with Gordon Strodel, Director, Data Strategy & Analytics Capability, in addition to Abhinav Batra, Associate Principal, Enterprise...

An Introduction to Time Series Forecasting with Generative AI

Time series forecasting has been a cornerstone of enterprise resource planning for decades. Predictions...

Unlock Faster Machine Learning with Graviton

We are excited to announce that Graviton, the ARM-based CPU instance offered by AWS, is now supported on the Databricks ML Runtime...

Databricks University Alliance Crosses 1,000 University Threshold

August 14, 2024 by Rob Reed
Databricks is thrilled to share that our University Alliance has welcomed its one-thousandth member school! This milestone is a testament to our mission to...

Announcing the Generative AI World Cup: A Global Hackathon by Databricks

Welcome to the Generative AI World Cup, a global hackathon inviting participants to develop innovative Generative AI applications that solve real-world problems...

Databricks SQL Serverless is now available on Google Cloud Platform

Today, we are thrilled to announce that Databricks SQL Serverless is now Generally Available on Google Cloud Platform (GCP)! As a key component...

Long Context RAG Performance of LLMs

Retrieval Augmented Generation (RAG) is the most widely adopted generative AI use case among our customers. RAG enhances the accuracy of LLMs by...