Today, we’re excited to announce MLflow v0.4.0, v0.4.1, and v0.4.2, which we released within the last week with several recently requested features. MLflow 0.4.2 is already available on PyPI, and the docs are updated. If you do pip install mlflow as described in the MLflow quickstart guide, you will get the latest release.
In this post, we’ll describe the new features and fixes in this release.
Azure Blob Storage Artifact Support
As part of MLflow 0.4.0, we’ve added support for storing artifacts in Azure Blob Storage through the --default-artifact-root parameter of the mlflow server command. This makes it easy to run MLflow training jobs on multiple Azure cloud VMs and track results across them. The following example shows how to launch the tracking server with an Azure Blob Storage artifact store; you will need to set the AZURE_STORAGE_CONNECTION_STRING environment variable as noted in MLflow Tracking > Azure Blob Storage.
mlflow server --default-artifact-root wasbs://$container@$account.blob.core.windows.net/
Using MLflow with PyTorch and TensorBoard
We’ve added samples that demonstrate advanced tracking, including a PyTorch TensorBoard sample that records results to both the MLflow UI and TensorBoard.
Thanks to PR 170, MLflow now includes support for H2O model export and serving.
Other Features and Bug Fixes
In addition to these features, these releases include other improvements, bug fixes, and documentation fixes. Some items worthy of note:
- MLflow experiments REST API and mlflow experiments create now support providing
- [UI] Show GitHub links in the UI for projects run from http(s):// GitHub URLs (Issue #235)
- Fix Spark model support when saving/loading models to/from distributed filesystems (Issue #180)
- [Tracking] GCS artifact storage is now a pluggable dependency (no longer installed by default). To enable GCS support, install google-cloud-storage on both the client and tracking server via pip. (Issue #202)
- [Projects] Support for running projects in subdirectories of Git repos (Issue #153)
- [SageMaker] Support for specifying a compute specification when deploying to SageMaker (Issue #185)
The full list of changes and contributions from the community can be found in the 0.4.2 Changelog. We welcome more input on email@example.com or by filing issues or submitting patches on GitHub. For real-time questions about MLflow, we’ve also recently created a Slack channel for MLflow, and you can follow @MLflow on Twitter.
MLflow 0.4.2 includes patches from Aaron Davidson, Andrew Chen, Arinto Murdopom, Corey Zumar, Javier Luraschi, Joel Akeret, Juntai Zheng, Matei Zaharia, Siddharth Murching, Stephanie Bodoff, Tomas Nykodym, and Toon Baeyen.