
Bringing Scalable AI to the Edge with Databricks and Azure DevOps

Andres Urrutia
Howard Wu
Nicole Lu
Bala Amavasai

Four Forces Driving Intelligent Manufacturing

 

The opportunity for machine learning and AI in manufacturing is immense. From better alignment of production with consumer demand to improved process control, yield prediction, defect detection, predictive maintenance, shipping optimization, and much, much more, ML/AI is poised to transform how manufacturers run their businesses, making these technologies a principal area of focus in Industry 4.0 initiatives.

But realizing this potential is not without its challenges. While there are many ways for organizations to miss the mark with machine learning and AI, one of the most critical is failing to anticipate how a model will be integrated into operational processes.

Edge Devices Are Essential for Shop Floor Predictions

In manufacturing, many of the opportunities for immediately exploiting ML/AI are found in and around the shop floor. Signals from machines, instruments, devices and sensors are fed to models that convert this information into predictions that help govern operations or trigger alerts. The ability of these models to steer these activities depends on reliable, secure and low-latency connectivity between the shop floor devices and the model, something that cannot always be guaranteed when the two are separated by large-scale networks.

But large-scale networks are exactly what is needed to collect the volumes of data required to train a model. During model training, algorithms are repeatedly exposed to the values collected from these real or simulated devices so that they may home in on relevant patterns in the data. This is a computationally intensive and time-consuming process, one that's best managed in the cloud.

While training requires access to such resources, the resulting model is often surprisingly lightweight. Information learned from terabytes of data may be condensed into megabytes or even kilobytes of metadata within a trained model. This means that a model previously trained in the cloud can generate fast, accurate predictions with the support of a very modest computational device. This opens up the opportunity to deploy the model on or near the shop floor on a lightweight device. Such an edge deployment reduces our dependency on a large, high-speed network to support integration scenarios. (Figure 1)

Figure 1. Relationship between cloud and edge device for model training and edge deployment
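To make this size asymmetry concrete, here is a minimal sketch using scikit-learn and synthetic sensor readings (both purely illustrative stand-ins): the training data occupies megabytes, while the trained model serializes to a few kilobytes, small enough for almost any edge device.

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate a batch of sensor readings; real training sets are far larger.
rng = np.random.default_rng(42)
X = rng.normal(size=(100_000, 20))        # 100k readings x 20 sensor channels
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical defect label

model = LogisticRegression().fit(X, y)

# Serializing keeps only the learned coefficients, not the training data,
# so the model footprint is orders of magnitude smaller than its inputs.
blob = pickle.dumps(model)
print(f"training data: {X.nbytes / 1e6:.0f} MB, model: {len(blob)} bytes")
```

The same pattern holds for most classical ML models: what ships to the edge is the distilled parameters, not the data they were learned from.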

Databricks and Azure DevOps Enable Edge Device Deployment

Putting all the parts together, we need an initial collection of data, pushed to the cloud, and a platform on which to train a model. As a unified, cloud-based analytics data platform, Databricks provides an environment within which a wide range of ML/AI models can be trained in a fast and scalable manner.
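As a sketch of that training step, the snippet below trains a classifier on synthetic feature data standing in for sensor readings (the table name in the comment is hypothetical); in a Databricks notebook the data would come from the lakehouse and the fitted model would then be tracked with MLflow for the packaging step described next.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for sensor features; on Databricks these would typically be
# read from a Delta table, e.g. spark.read.table("sensor_features").toPandas().
X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"holdout accuracy: {acc:.3f}")

# In a Databricks notebook the trained model would then be logged with
# MLflow, e.g. mlflow.sklearn.log_model(model, "model"), so it can be
# packaged into a container image downstream.
```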

We then need to deploy the trained model to a container image. This container must host not just the model and all its dependencies but also present the model in a manner that's accessible to a wide variety of external applications. Today, REST APIs are the gold standard for exposing services in this way. And while the mechanics of defining a robust REST API can be a bit daunting, the good news is that Databricks, through its integration of the MLflow model management and deployment platform, enables you to do this with a single line of code.
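From the shop floor side, calling that REST API amounts to posting a small JSON body to the container's scoring endpoint. The sketch below builds the `dataframe_split` payload that MLflow 2.x serving containers accept at `/invocations`; the endpoint URL and sensor column names are illustrative placeholders.

```python
import json

def build_invocation_payload(columns, rows):
    """Build the JSON body an MLflow scoring container expects at /invocations."""
    return json.dumps({"dataframe_split": {"columns": columns, "data": rows}})

# Hypothetical address of the model container running on the edge device:
ENDPOINT = "http://edge-device.local:5000/invocations"

payload = build_invocation_payload(
    ["temperature", "vibration", "pressure"],
    [[72.4, 0.013, 101.2]],
)

# A shop floor application would then POST the payload, e.g. with requests:
#   requests.post(ENDPOINT, data=payload,
#                 headers={"Content-Type": "application/json"})
```

Because the interface is plain HTTP and JSON, any PLC gateway, SCADA bridge, or local service that can make a web request can consume the model's predictions.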

With the container defined, it must now be deployed to an edge device. A typical device provides a 64-bit processor, a few gigabytes of RAM, WiFi or ethernet connectivity, and runs a lightweight operating system, such as Linux, capable of hosting a container image. Tools such as Azure DevOps, built to manage the deployment of containers to a wide range of devices, can then be used to push the model container from the cloud to the device and launch the local predictive service on it.

To help our customers explore this process of model training, packaging and edge deployment, we’ve built a Solution Accelerator with pre-built resources that document in detail each step of the journey from Databricks to the edge. It’s our hope that these assets will help demystify the edge deployment process, allowing organizations to focus on how predictive capabilities can improve shop floor operations, not the mechanics of the deployment process.

Explore the solution accelerator here and contact us to discuss your Edge deployments.
