Webinar

AWS | Databricks ML Dev Day Workshop

Unifying data pipelines and Machine Learning with Delta Lake and Amazon SageMaker

On-demand

In this workshop, we’ll cover best practices for using powerful open source technologies to simplify and scale your data and ML efforts. We’ll discuss how to leverage Apache Spark™, the de facto data processing and analytics engine that prepares and unifies data at massive scale across diverse sources, together with Delta Lake to make your data lake ML-ready. You’ll also learn how to use ML frameworks such as TensorFlow, XGBoost and scikit-learn to train models for different requirements. Finally, you’ll learn how to use MLflow to track experiment runs across multiple users within a reproducible environment and to manage the deployment of models to production on Amazon SageMaker.
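
To give a concrete sense of that workflow, here is a minimal sketch in Python, assuming a Spark environment with the delta-spark, mlflow and scikit-learn packages available; the Delta table path, the "label" column and the hyperparameters are illustrative placeholders, not part of the workshop material.

    import mlflow
    import mlflow.sklearn
    from pyspark.sql import SparkSession
    from sklearn.ensemble import GradientBoostingClassifier

    spark = SparkSession.builder.getOrCreate()

    # Read curated features from a Delta Lake table (the path is a placeholder)
    features = spark.read.format("delta").load("/mnt/datalake/features").toPandas()
    X, y = features.drop(columns=["label"]), features["label"]

    # Track the training run with MLflow so it can be reproduced and compared later
    with mlflow.start_run(run_name="dev-day-demo"):
        mlflow.log_param("n_estimators", 100)
        model = GradientBoostingClassifier(n_estimators=100).fit(X, y)
        mlflow.log_metric("train_accuracy", model.score(X, y))
        mlflow.sklearn.log_model(model, artifact_path="model")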

  • Learn how to build highly scalable and reliable pipelines for analytics
  • Gain deeper insight into Apache Spark and Databricks, including the latest updates with Delta Lake
  • Train a model on your data and learn best practices for working with ML frameworks (TensorFlow, XGBoost, scikit-learn, etc.)
  • Find out how MLflow can help you track experiments, share projects and deploy models in the cloud with Amazon SageMaker (see the deployment sketch after this list)
  • Network and learn from your ML and Apache Spark peers
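
To illustrate the hand-off to Amazon SageMaker mentioned above, the rough sketch below uses MLflow’s deployment client; the endpoint name, run ID, AWS region and IAM execution role are placeholders you would replace with values from your own account.

    from mlflow.deployments import get_deploy_client

    # Use MLflow's built-in SageMaker deployment target
    client = get_deploy_client("sagemaker")

    # model_uri points at a model logged in an earlier MLflow run (run ID is a placeholder)
    client.create_deployment(
        name="dev-day-endpoint",
        model_uri="runs:/<run-id>/model",
        config={
            "region_name": "us-west-2",
            "execution_role_arn": "arn:aws:iam::<account-id>:role/<sagemaker-execution-role>",
        },
    )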