Optum helps hospitals accurately calculate insurance claims reimbursements by detecting underpayment from insurance companies. With millions of claims per day and a requirement to return results to hospitals for revenue recovery within 8 hours, Optum's legacy Oracle-based system struggled to process and analyze the data fast enough. With Azure Databricks, they have been able to improve data pipeline performance by 2x, enabling faster delivery of results to hospitals and saving them millions of dollars in potentially lost revenue.
Within the healthcare industry, processing claims is a critical function that must be handled accurately and swiftly. However, with millions of claims submitted per day, hospitals struggle to reconcile claims and payments, putting them at significant risk of losing millions in revenue. Optum's goal is to leverage data and analytics to make claims processing faster, more accurate, and less error-prone.
However, with massive volumes of claims data arriving daily, Optum's legacy Oracle-based system struggled to scale performance to meet their 8-hour SLA. "Incorrect or underpaid reimbursements can cost hospitals millions of dollars in revenue," explained Mohammed Salim Sayed, Principal Data Engineer at Optum. "Speed and efficiency are paramount. And we struggled to achieve both with our legacy system."
Optum's legacy analytics system was highly complex and rigid, requiring significant DevOps support to maintain and manage. As data volumes grew, so did the complexity, impacting the performance and reliability of their pipelines.
The legacy system also created challenges around cost management. Not only was it operationally expensive to manage, but it could not scale compute independently from storage, resulting in exorbitant compute costs. They realized they needed to move off their legacy system onto an Apache Spark-based platform to take advantage of the scale of distributed computing and the power of Spark.
Databricks provides Optum with a unified data analytics platform on Azure that has greatly simplified infrastructure management and data engineering. Cluster management is fully automated, allowing them to easily provision compute clusters while optimizing costs based on consumption.
They are able to easily access and ingest a variety of data, including medical records, claims contracts, and reimbursement receipts, without having to jump through hoops. With the power of Databricks and the scalability of Azure, they can now process millions of claims per day without compromising performance. In addition, Delta Lake has enabled them to blend batch and streaming pipelines without sacrificing data reliability.
“Databricks has added another level of performance to our data pipelines,” said Sayed. “Now we can easily process batch and data streams, and quickly feed data to our analysts for BI and our data science team for model development and training.”
The performance impact of Azure Databricks and Delta Lake became most apparent as their data volumes scaled. In a direct comparison, they can now process 80 million claims in just 86 minutes (nearly 1 million claims per minute), compared to their legacy system, which maxed out at 160 minutes for the same 80 million claims (500,000 claims per minute).
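The cited figures work out as a roughly 1.86x speedup; a quick back-of-the-envelope check (values taken directly from the comparison above):

```python
# Throughput comparison: 80M claims on the new vs. legacy system.
CLAIMS = 80_000_000

databricks_minutes = 86   # Azure Databricks + Delta Lake
legacy_minutes = 160      # legacy Oracle-based system

databricks_rate = CLAIMS / databricks_minutes  # ~930,000 claims/minute
legacy_rate = CLAIMS / legacy_minutes          # 500,000 claims/minute
speedup = legacy_minutes / databricks_minutes  # ~1.86x

print(f"Databricks: {databricks_rate:,.0f} claims/min")
print(f"Legacy:     {legacy_rate:,.0f} claims/min")
print(f"Speedup:    {speedup:.2f}x")
```

This is the arithmetic behind the "half the time" claim quoted below: 86 minutes is just over half of 160.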
“What we noticed was that Databricks was already faster and more stable,” explained Sayed. “We can now handle the same volume of claims in half the time.”
With Azure Databricks, they can now process data faster, train models more efficiently, and ultimately deliver results to hospitals sooner, saving them millions of dollars in potentially lost revenue.