Serverless Computing

Serverless computing is the latest evolution of compute infrastructure. Organizations used to need physical servers to run web applications. Then the rise of cloud computing enabled them to create virtual servers, although they still had to spend time and effort managing them. Now, under the serverless computing model, a cloud service provider takes responsibility for infrastructure management tasks, while enterprise developers can focus on creating and deploying applications.

Serverless computing can help organizations accelerate development, reduce operational overhead and focus on business logic rather than infrastructure management. It enables stronger security, faster product delivery and better resource optimization, while creating more opportunities for innovation.

What is serverless computing?

Serverless computing is an application development model that allows developers to build, deploy and run applications without managing servers or back-end infrastructure. “Serverless” doesn’t mean servers aren’t used, but that they are fully managed by a cloud service provider or vendor, so developers don’t need to interact with them. The provider handles provisioning the cloud infrastructure required to run the code, scaling it as needed and other infrastructure tasks. This allows developers to focus solely on writing code, integrating applications and managing data, while working with efficient, scalable, fully managed infrastructure.
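
To make the model concrete, here is a minimal sketch of a function-as-a-service handler, using the widely known AWS Lambda handler convention as one example. The event shape, greeting logic and local test call are illustrative only; the point is that the developer supplies just the function, and the provider provisions compute, invokes it on demand and scales it automatically.

```python
import json


def lambda_handler(event, context):
    # Entry point the platform invokes for each request; the developer supplies
    # only this function, while the provider provisions compute, runs it on
    # demand and scales instances up or down automatically.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


if __name__ == "__main__":
    # Local smoke test with a fake event; in production the provider supplies
    # the event and context objects.
    print(lambda_handler({"queryStringParameters": {"name": "serverless"}}, None))
```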

What challenges does serverless computing help solve?

Serverless computing helps organizations solve several challenges caused by conventional server compute models, including: 

  • Time-consuming management: Configuring and managing compute — especially selecting the correct compute type for a specific workload — has long been a challenge for enterprises, taking up time that could be spent on core business goals. 

  • Costly idle time: Conventional server compute creates extra costs because enterprises pay not only for the time it takes to process workloads, but also for idle time, such as during compute initialization and scale-up. 

  • Inefficiency: Manual infrastructure management is often inefficient. For example, teams using manual methods may overprovision compute and end up paying for capacity they don’t need or use. 

  • Slow processing: Compute is not always ready when needed, resulting in situations such as teams having to wait for infrastructure resources to become available to run queries or experiencing slow compute startup. 

  • Inability to scale: Without advanced autoscaling, infrastructure capacity doesn’t match demand. Developers can’t just focus on writing code; they also need to plan for usage and capacity. 

  • Service disruption: Unplanned downtime often disrupts service, and systems must be architected for high reliability and fallback when servers fail. 

Advantages of serverless

The serverless model offers organizations several advantages. Compared with conventional server compute models, serverless is: 

  • Maintenance-free: Teams no longer have to worry about capacity management, performance optimization, patching, upgrading or other infrastructure administration. Developers can focus on writing code to solve customer problems. 
  • Cost-effective: Serverless compute models enable organizations to pay only for the time and resources, such as CPU and memory, that they use for processing workloads.  
  • Efficient: Serverless systems automatically adjust infrastructure according to demand, ensuring that resources are there when needed, but teams aren’t overprovisioning or paying for unused resources.  
  • Scalable and available: Serverless compute provides an elastic environment that’s instantly available and automatically scales without manual intervention.  
  • Reliable: Serverless compute helps mitigate downtime at the cloud service provider level, utilizing server pools with automatic recovery. 

Serverless empowers organizations to focus on high-value work such as responding to customer feedback and quickly releasing code changes, rather than on routine infrastructure tasks. This enables companies to bring solutions to market faster and maintain a competitive edge.

Serverless architecture

In serverless architecture, a serverless platform monitors the cloud resources a workload needs to run, allocates just enough capacity to meet demand, then scales the infrastructure back down when demand decreases. This makes it easier to scale, update and independently deploy the separate components of a system, and it lets developers deploy back-end code on the cloud provider’s infrastructure without having to manage or maintain it.
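
As a rough illustration of this scale-with-demand behavior (not any provider’s actual algorithm), the sketch below sizes an instance pool to the current backlog of work and returns to zero when demand disappears. The throughput figure and capacity limits are made-up parameters.

```python
import math


def target_capacity(queued_requests: int,
                    per_instance_throughput: int = 50,
                    min_instances: int = 0,
                    max_instances: int = 100) -> int:
    # Size the instance pool to the current backlog and scale down to zero
    # (or the configured minimum) when there is no work to do.
    if queued_requests <= 0:
        return min_instances
    needed = math.ceil(queued_requests / per_instance_throughput)
    return max(min_instances, min(needed, max_instances))


if __name__ == "__main__":
    for load in (0, 20, 500, 20_000):
        print(f"{load:>6} queued requests -> {target_capacity(load)} instances")
```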

Serverless compute on Databricks

Databricks is a fully serverless-enabled platform, offering serverless compute for extract, transform, load (ETL) workloads including Jobs, Notebooks and Delta Live Tables (DLT), as well as Databricks SQL and Databricks Model Serving on AWS and Azure.  
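
As one hedged example of what “no infrastructure to manage” looks like in practice, the sketch below runs a query through the Databricks SQL Connector for Python. The hostname, HTTP path and token are placeholders read from environment variables, the HTTP path is assumed to point at a serverless SQL warehouse, and the sample table is illustrative; no cluster has to be created or sized beforehand.

```python
# Requires: pip install databricks-sql-connector
import os

from databricks import sql

# Workspace hostname, warehouse HTTP path and access token are placeholders
# read from the environment; the HTTP path is assumed to point at a
# serverless SQL warehouse.
with sql.connect(
    server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],
    access_token=os.environ["DATABRICKS_TOKEN"],
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM samples.nyctaxi.trips LIMIT 10")
        for row in cursor.fetchall():
            print(row)
```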

With serverless compute on the Databricks Data Intelligence Platform, Databricks provides rapid workload startup, automatic infrastructure scaling, optimized performance and seamless version upgrades of Databricks Runtime. The benefits of serverless compute on Databricks include:

  • Performance: Serverless compute on Databricks provides superior performance that is fast, simple and reliable. With instant and elastic compute, you’ll no longer have to wait for infrastructure resources to become available to run queries or overprovision resources to handle spikes in usage. Databricks serverless compute for DLT pipelines eliminates cold starts, improves throughput and provides more efficient transformations. SQL Serverless offers performance features such as improved query results caching and predictive I/O, which make point lookups faster and cheaper and vastly increase speed for data updates and deletes. 
  • Cost savings: The serverless compute platform uses machine learning algorithms to provision and scale compute resources right when teams need them. This enables substantial cost savings without the need to shut down clusters manually. A simplified pricing model means there’s only one bill to track and only one place to check costs. Efficiency improvements enable most customers to achieve a 25% or greater reduction in their serverless compute spend. 
  • No management overhead: Serverless provides a fully managed service, eliminating the burden of capacity management, patching, upgrading and performance optimization of the cluster. Teams only need to focus on data and the insights it holds.   
  • Speed: Compute starts up in seconds, not minutes. Databricks runs “warm pools” of instances so that compute is ready when teams are.  
  • Reliability: Databricks’ serverless compute shields customers from cloud outages with automatic instance type failover and a “warm pool” of instances that buffers against availability shortages.  
  • Scalability: Databricks’ intelligent serverless autoscaler automatically scales workspace capacity in graduated steps to meet demand, ensuring that workspaces always have the right amount of capacity provisioned.

Serverless compute on Databricks offers fast, simple and reliable service, enabling organizations to move at the speed of business and focus on delivering value rather than managing infrastructure.  
