Databricks Pricing

One simple platform to unify all your data, analytics and AI workloads — on your preferred cloud

How does Databricks pricing work?

Pay as you go

Databricks offers a pay-as-you-go approach with no up-front costs. You pay only for the compute resources you use, billed at per-second granularity. Save up to 90% by running on Spot instances, which use spare cloud compute capacity.

Save more with committed-use discounts

Databricks helps you lower your costs with discounts when you commit to certain levels of usage. The larger your usage commitment, the greater your discount compared to on-demand usage. You can use spending commitments across multiple clouds.

FAQ

The 14-day free trial gives you:

  • A collaborative environment for data teams to build solutions together
  • Interactive notebooks to use Apache Spark™, SQL, Python, Scala, Delta Lake, MLflow, TensorFlow, Keras, scikit-learn and more

Please note that you will still be charged by your cloud provider for resources (e.g. compute instances) used within your account during the free trial.

When the free trial ends, you are automatically moved to a paid subscription on the same plan. You can, however, cancel your subscription at any time.

Databricks pricing is based on your compute usage. Storage, networking and related costs will vary depending on the services you choose and your cloud service provider.

A Databricks Unit (DBU) is a normalized unit of processing power on the Databricks Lakehouse Platform used for measurement and pricing purposes. The number of DBUs a workload consumes is driven by processing metrics, which may include the compute resources used and the amount of data processed. See DBU pricing details for AWS, Azure and Google Cloud.
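As a rough illustration of how DBU-based pricing works, compute cost is the number of DBUs a workload consumes multiplied by the per-DBU rate for its workload type. The rates in this sketch are hypothetical placeholders, not actual Databricks prices; see the cloud-specific pricing pages for real rates.

```python
# Sketch of a DBU-based cost estimate: cost = DBUs consumed × per-DBU rate.
# The rates below are hypothetical placeholders, NOT actual Databricks
# prices, which vary by cloud, region and workload type.
HYPOTHETICAL_RATES_USD_PER_DBU = {
    "jobs_compute": 0.15,
    "all_purpose_compute": 0.40,
}

def estimate_cost(dbus_consumed: float, workload_type: str) -> float:
    """Return an estimated compute cost in USD for a workload."""
    rate = HYPOTHETICAL_RATES_USD_PER_DBU[workload_type]
    return dbus_consumed * rate

# e.g. a job that consumes 100 DBUs on Jobs Compute costs ≈ 15.0 USD
# at the placeholder rate above.
print(estimate_cost(100, "jobs_compute"))
```

Note that this covers only Databricks compute charges; as stated above, storage, networking and the underlying cloud instances are billed separately by your cloud provider.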

Databricks prices and Spot discounts may vary based on geographic region and cloud service provider. For details, please see the Databricks pricing pages for AWS, Azure and Google Cloud.

When estimating your savings with Databricks, it is important to consider key aspects of alternative solutions, including job completion rate, duration and the manual effort required to support a job. To help you accurately estimate your savings, we recommend comparing side-by-side results as part of a proof-of-concept deployment. See the speedup achieved by this customer and the results of this customer benchmark. Contact us to get started.

We generally recommend configuring compute resources to shut down automatically when jobs complete. When you run a job on a new job cluster (usually the recommended approach), the cluster terminates automatically once the job finishes, and the workload consumes Jobs Compute DBUs. If you instead schedule a job on an existing all-purpose cluster, that cluster auto-starts, can be configured to terminate automatically after a specified idle time (e.g., 120 minutes), and the workload consumes All-Purpose Compute DBUs.

Databricks Solution Accelerators

Deliver data analytics and AI value faster

The Databricks Solution Accelerators are designed to help Databricks customers go from idea to proof of concept in less than two weeks. Each Solution Accelerator includes fully functional notebooks that tackle the most common and high-impact use cases our customers face.

Learn more →