Every enterprise today wants to accelerate innovation by building AI into their business. However, most companies struggle with preparing large datasets for analytics, managing the proliferation of ML frameworks, and moving models from development to production.
AWS and Databricks are presenting a series of Dev Day events where we will cover best practices for using powerful open source technologies to simplify and scale your ML efforts. We'll discuss how to leverage Apache Spark™, the de facto data processing and analytics engine in enterprises today, for data preparation as it unifies data at massive scale across various sources. You'll also learn how to use ML frameworks (e.g., TensorFlow, XGBoost, scikit-learn) to train models based on different requirements. And finally, you can learn how to use MLflow to track experiment runs between multiple users within a reproducible environment, and manage the deployment of models to production on Amazon SageMaker.
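To give a flavor of the experiment-tracking workflow described above, here is a minimal sketch of logging a scikit-learn run with MLflow. The experiment name, dataset, and hyperparameter values are illustrative, not taken from the workshop material.

```python
# Minimal sketch of MLflow experiment tracking with scikit-learn.
# The experiment name and parameter values below are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("dev-day-demo")
with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Each run records its parameters, metrics, and model artifact,
    # so results can be compared and reproduced across users.
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```

Once a model is logged this way, MLflow's SageMaker deployment tooling (for example, the `mlflow sagemaker` CLI group in MLflow 1.x) can push it to a SageMaker endpoint; the exact commands depend on your MLflow version.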
Join us at a half-day workshop near you to learn how unified analytics can bring data science and engineering together to accelerate your ML efforts. This free workshop will give you the opportunity to:
- Learn how to build highly scalable and reliable pipelines for analytics
- Get deeper insights into Apache Spark and Databricks, and learn how to manage data using Delta Lake (see the sketch after this list)
- Train a model on your data and learn best practices for working with ML frameworks (e.g., XGBoost, scikit-learn)
- Learn how to use MLflow to track experiments, share projects, and deploy models to the cloud with Amazon SageMaker
- Network and learn from your ML and Apache Spark peers
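As a taste of the Delta Lake material mentioned above, here is a minimal PySpark sketch of writing and reading a Delta table. It assumes a Spark session with Delta Lake support (for example, a Databricks cluster); the paths and the column name are illustrative.

```python
# Minimal sketch: writing and reading a Delta table with PySpark.
# Assumes Delta Lake support is available on the cluster (e.g., Databricks);
# the paths and the "event_type" column are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Land raw JSON events as a Delta table to get ACID writes and schema enforcement.
events = spark.read.json("/data/raw/events")
events.write.format("delta").mode("overwrite").save("/data/delta/events")

# Read the Delta table back for analytics or as a feature source for model training.
delta_events = spark.read.format("delta").load("/data/delta/events")
delta_events.groupBy("event_type").count().show()
```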
Join us in these cities:
Austin, TX | McLean, VA | Dallas, TX | Atlanta, GA | Cambridge, MA | Palo Alto, CA
Santa Monica, CA | Chicago, IL | Toronto, ON | New York, NY