On-Demand Webinar and FAQ: How to Evaluate Cloud-based Apache Spark Platforms

Last week, we held a live webinar, How to Evaluate Cloud-based Apache Spark Platforms, to help those currently evaluating Spark platforms understand the key decision criteria and how to measure each vendor's offering against those requirements.

This webinar, featuring ESG senior analyst Nik Rouda, provided practical best practices and tools for evaluating and comparing the most popular cloud-based Spark solutions, including how to set up a successful proof of concept (PoC) and how to estimate the total cost of ownership (TCO) and return on investment (ROI) of each solution.
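For readers building their own comparison spreadsheet, the TCO and ROI arithmetic the webinar walks through can be sketched in a few lines. This is a minimal illustration only; the cost categories and all figures below are hypothetical placeholders, not Databricks or vendor numbers:

```python
def tco(license_cost, infra_cost, ops_cost):
    """Total cost of ownership: sum of all costs over the evaluation period."""
    return license_cost + infra_cost + ops_cost

def roi(benefit, total_cost):
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (benefit - total_cost) / total_cost

# Hypothetical one-year figures for a single vendor (placeholders only)
cost = tco(license_cost=50_000, infra_cost=120_000, ops_cost=80_000)
print(f"TCO: ${cost:,}")                  # TCO: $250,000
print(f"ROI: {roi(400_000, cost):.0%}")   # ROI: 60%
```

Running the same arithmetic per vendor, with consistent cost categories, makes the side-by-side comparison the webinar recommends straightforward.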

The webinar is now available on-demand, and the buyer's guide referenced in the webinar, along with the slides, is downloadable as an attachment to the webinar.

We have also answered the common questions raised by webinar viewers below. If you have additional questions, check out the Databricks Forum or the new documentation resource.

If you’d like to try Databricks, you can sign up for a free 14-day trial today.

Common webinar questions and answers


What is the trend in adoption of Python vs. JVM-centric (Java/Scala) languages, as a means to plan for development team skills, preferences, and hiring?

Doesn’t Spark need to integrate with a file management system such as HDFS? Comparing Spark vs. Hadoop seems strange, since Spark, as I understand it, runs on HDFS.

How do you reuse clusters and auto optimize cluster resources in Databricks?

What do you offer to help manage streaming workflows?

How well does Databricks work with MS SQL 2016 or Oracle 12c?

Does Databricks provide AWS SQS integration? For example, if I run a job (notebook) can I send SQS messages to update my solution with job status?
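On the SQS question above, one common pattern is to call AWS directly from notebook code using boto3 at the end of a job run. The sketch below assumes boto3 credentials are already configured in the environment, and the queue URL, job name, and `notify` helper are all hypothetical placeholders, not a built-in Databricks integration:

```python
import json

def build_status_message(job_name, run_id, status):
    """Serialize a job-status update as a JSON string for an SQS message body."""
    return json.dumps({"job": job_name, "run_id": run_id, "status": status})

def notify(sqs_client, queue_url, job_name, run_id, status):
    """Send a job-status update to SQS; call this at the end of a notebook/job."""
    return sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=build_status_message(job_name, run_id, status),
    )

# Example usage inside a notebook (queue URL is a placeholder):
# import boto3
# sqs = boto3.client("sqs", region_name="us-west-2")
# notify(sqs, "https://sqs.us-west-2.amazonaws.com/123456789012/job-status",
#        "nightly-etl", run_id=42, status="SUCCEEDED")
```

Keeping the message-building logic separate from the send call makes the status payload easy to test without touching AWS.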
