Embedding Insight through Prediction Driven Logistics


Aggreko are a leading provider of temporary power and temperature control solutions, serving customers across the globe on projects ranging from the Olympics to humanitarian disaster relief. In this talk, Helena and Andy will discuss how the Insights team have developed scalable machine learning solutions to support the business. In particular, they will discuss fuel consumption forecasts that have helped Aggreko’s fuel logistics teams improve customer service levels and reduce costs by becoming more proactive and insight driven.

Speakers: Helena Orihuela and Andy McMahon

Transcript

– So, welcome to our presentation on embedding insight through prediction driven logistics. I’m Andy McMahon, Analytics Team Leader at Aggreko, and today I’ll be presenting with my colleague, Helena Orihuela, who’s a Senior Data Scientist, also in the analytics team. I’m going to start off today with a brief introduction to Aggreko and what we do. I’ll then explain how we work in extended collaborative teams to ensure that our data science and machine learning projects deliver value and answer the right questions across our business. I’ll introduce you to the idea of predictive alarm management at Aggreko, and then after that, I’ll hand over to Helena, who will take you through a particular use case of applying machine learning to fuel tank level forecasting and how that benefits our operations. Helena will finish with a quick demo, and then we’ll summarize and take any questions. So first of all, who are Aggreko? Aggreko is a world leading provider of mobile, modular power, temperature control, and energy services. We work at the forefront of a rapidly changing energy market and are focused on solving our customers’ challenges to provide cost-effective, flexible, and now greener solutions across the globe. We harness innovation that helps us maintain a global reach, and we supply portable equipment for a wide range of use cases. That could be anything from unique commercial and industrial projects, through to utility scale power provision, or even responding to humanitarian emergencies. We bring expertise and equipment to any location globally, from the world’s busiest cities to its most remote places. And our overall mission, which is up on the screen here, has a particular focus on making a positive impact through those solutions, which makes it a really exciting place to do data science. I’ve got some numbers here about Aggreko, just to put in context the scale of that impact I was describing. In our power fleet, we have over 10 gigawatts of distributed power available, and that consists of a variety of generators and associated equipment. These look like the four boxes you can see in the top left hand of the slide here, and they come in a variety of flavors, from diesel, gas, and next generation biofuels, through to renewable hybrid systems with industrial scale battery storage. In 2019, we took over 1.6 billion pounds in revenue, across 265 locations in a hundred countries, and we currently employ around 7,000 people. So we really are global in that reach and impact. But given the nature of this summit, you’ll obviously be asking the question, how do you work with your data? Like any business, at Aggreko we’re determined to ensure that our data science and machine learning solutions do not become siloed, and to make sure that they drive concrete business value. To make sure this happens, we like to work in what we call an extended data team, where we incorporate all of the relevant people with the relevant knowledge from across multiple functional teams to work on any single given project. So for example, in that extended data team, we’ll have members of Advanced Analytics, the team that Helena and I work in, where we are dedicated to understanding our data on a really deep level, with a real focus on providing algorithms, machine learning, and statistical analysis to support decisions.
We work alongside our business intelligence colleagues, who provide real-time operational reporting on business KPIs so that we get the right data to the right people at the right time. We are both supported by a team from Data Engineering, tasked with the ingestion, transformation, and curation of data from across multiple enterprise systems into our enterprise Data Lake. And then finally, but perhaps most importantly, we make sure we’re always working with internal partners from across the business. This could be experts in technical matters from the US, or our operations experts in Dubai. It doesn’t matter: they’ll be incorporated into the team to provide direction and ownership of the solution, so that we know we’re answering the right questions that will drive that business value and ensure uptake of the final solution that we produce. And as I said, the point of all this is to drive value for the business. So what have we actually achieved with this approach? If we focus on just the past year, and the contribution of Advanced Analytics alone to this extended data team, we’ve processed terabytes of data using our Data Lake and associated technologies like Databricks. We’ve deployed to production more than 10 machine learning algorithms, which are driving insight on a daily basis, and we’ve generated millions of dollars of insight for the business. So this is the proof that this sort of approach has worked for us. Now, once you have the team, you’re going to want to build something. And when we do that, we try to operate under a few key design principles, and there’s one in particular that I think is really relevant to this audience: when we produce one analysis, we would like it to be flexible and provide answers through multiple different output channels. If I take this particular example of generator projects, we’ll get data from those assets streamed via IoT solutions, from onboard telemetry units on the generators through to our Data Lake, which now uses Delta Lake. That data will then be processed through to our production environment, which is based around Databricks, where we can run statistical analysis and machine learning models. Then finally, and really importantly, we can surface that result, and that insight, through multiple different channels. Here I’ve highlighted three particularly important ones, but there are many more across Aggreko. For example, we could surface the results in a Power BI dashboard and allow users to interact with, digest, and really query that data and the insight that we’re providing. We could send an Outlook email in order to push alerts, or particular insight, to specific people on a very dynamic schedule. And finally, but perhaps most importantly for our generators, we’ll supply information to our alarm management system, or AMS, which I’d like to spend a few minutes discussing now. The alarm management system is an interactive web application that was developed by our Application Dev Team internally at Aggreko. It provides a single pane of glass where technicians and relevant operations colleagues can view what we call proactive and predictive alarms for assets. Predictive alarms in this case are those where we have used machine learning algorithms to forecast a future issue, like a part failure, a fuel outage, or a generator completely failing. On the screenshot I’ve taken here, I’ve filtered to only those alarms which are predictive.
And we can see that they’re arranged in an easy-to-read tabular format. Each alarm provides information on the customer, the services they are paying for, and key information on the asset. We also provide a priority flag to help users effectively triage and prioritize maintenance activities based on predefined criteria. And then finally, the solution provides contextual information such as satellite views, notes on the asset, contact information, and a detailed history of any previous work carried out on that generator and on that site. I’m now going to hand over to Helena, who’s going to discuss a particular use case where we provide insight to all three of these examples I’ve mentioned: a Power BI dashboard, an Outlook email, and our alarm management system.
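As a rough illustration of the “one analysis, many output channels” pattern Andy describes, the sketch below reads telemetry from a Delta table in a Databricks notebook, computes a simple per-asset result, and surfaces it both to a curated table (which a Power BI dashboard could read) and to an HTTP endpoint (standing in for the email or AMS channels). All table names, column names, and the endpoint URL are hypothetical stand-ins, not Aggreko’s actual schema.

```python
# A minimal sketch of the "one analysis, many output channels" pattern,
# as it might look in a Databricks notebook (where `spark` is provided).
# Table names, column names, and the endpoint URL are hypothetical.
import requests
from pyspark.sql import functions as F

# 1. Read generator telemetry that has landed in Delta Lake.
telemetry = spark.table("telemetry.generator_readings")

# 2. Run the analysis once: here, a toy per-asset health summary.
scores = (
    telemetry.groupBy("asset_id")
    .agg(
        F.avg("engine_temp_c").alias("avg_temp"),
        F.max("event_ts").alias("last_seen"),
    )
    .withColumn("needs_attention", F.col("avg_temp") > 95.0)
)

# 3a. Channel one: persist to a curated Delta table that a
#     Power BI dashboard reads directly.
scores.write.format("delta").mode("overwrite").saveAsTable("curated.asset_health")

# 3b. Channels two and three: push only the actionable rows to the
#     alerting systems (email service, AMS) over HTTP.
for row in scores.filter("needs_attention").collect():
    requests.post(
        "https://example.internal/ams/alarms",  # hypothetical endpoint
        json={"asset_id": row["asset_id"], "avg_temp": row["avg_temp"]},
        timeout=10,
    )
```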

– Thanks Andy. Let’s continue with the use case of the fuel level prediction alarm system for external tanks, and its use in logistics. I would like to start by explaining the problem. When clients approach us to rent fuel tanks, they have the option to delegate their fuel requirements to us. As part of this service, we agree that the tank will be replenished and won’t run empty while on hire. By taking this responsibility, we remove from the client the burden of monitoring consumption and arranging deliveries. As Andy mentioned before, we are a global company that works in a lot of different environments, so inside Aggreko we have a fuel management team that remotely monitors all our linked tanks 24/7. Monitoring a few fuel tanks is not a problem, but as the numbers increase this task becomes complex, so fuel outages or a high logistics spend can occur. An automated forecast of the remaining hours of fuel for the complete fleet of tanks would help us to plan the logistics and inform the customer of shipment times in advance. Moving to the solution, I would like to start with the architecture. This architecture integrates a set of technologies in a manner that brings reliability, scalability, and robustness to the insight that we provide to the business. In the bottom layer we find Delta Lake, where our data lands and is stored. Information here is available in raw or transformed format, depending on our needs. On top of that, we have a layer that contains a set of internal and external libraries. These libraries contain functionality that helps us during development. Next, we have the Data Acquisition and Modeling pipeline. This layer is composed of multiple Databricks notebooks, which make use of the libraries in the previous layer. These notebooks contain the workflow from data ingestion to model deployment. Finally, in the top layer, there is an orchestrator. This orchestrator oversees the updating of the models. For example, in this case, if the model finds tanks that have passed their set threshold of remaining hours of fuel, the orchestrator will trigger a signal to the messaging interface to distribute the tank information to the different systems. So, let’s continue with more detail on how the model works. Like any other data science pipeline, this alarm system starts with the ingestion of historic information. We do this per asset and use up to two months of data. The next stage is to do a basic analysis and some quality checks, since our data has some caveats, and this is where our collaboration with subject matter experts has helped us to understand anomalies in the data. For instance, fuel expands with heat, so the fuel level sensor will read this expansion as refueling. This means that an environmental factor can add noise to the readings that we receive. So after cleaning the data, we build a set of bespoke features and feed them into the model. The model is based on a classic multivariate regression algorithm. It is able to predict the remaining hours of the tank, assuming that the tank will be used continuously until emptied. When the remaining hours fall below a given threshold, it triggers an alarm, as I mentioned before. The threshold has been established in collaboration with our team of experts. Once the trigger reaches the messaging interface, it will start an Azure Logic App that will broadcast the results.
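To make the shape of this pipeline concrete, here is a minimal sketch of the per-asset remaining-hours estimate. It assumes hourly fuel level readings in a pandas DataFrame with `timestamp` and `fuel_level_litres` columns; the column names, the crude thermal-noise cleaning rule, and the threshold value are all illustrative assumptions. The production model is multivariate, whereas this sketch regresses on elapsed time alone for brevity.

```python
# A minimal sketch of the per-asset remaining-hours estimate. Assumes a
# pandas DataFrame with datetime `timestamp` and `fuel_level_litres`
# columns for a single tank; all names and constants are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def remaining_hours(df: pd.DataFrame, capacity_litres: float) -> float:
    """Estimate hours of fuel left, assuming continuous use until empty."""
    df = df.sort_values("timestamp").tail(24 * 60)  # up to ~2 months, hourly

    # Crude cleaning step: small upward jumps are assumed to be thermal
    # expansion noise rather than genuine refuelling, and are smoothed out.
    level = df["fuel_level_litres"].to_numpy(dtype=float)
    deltas = np.diff(level, prepend=level[0])
    thermal_noise = (deltas > 0) & (deltas < 0.02 * capacity_litres)
    level = pd.Series(np.where(thermal_noise, np.nan, level)).interpolate().to_numpy()

    # Regress fuel level against elapsed hours; the negated slope is the
    # burn rate. (The production model is multivariate; this uses one feature.)
    hours = (df["timestamp"] - df["timestamp"].iloc[0]).dt.total_seconds() / 3600
    fit = LinearRegression().fit(hours.to_numpy().reshape(-1, 1), level)
    burn_rate = -fit.coef_[0]  # litres per hour, positive when draining
    if burn_rate <= 0:
        return float("inf")  # not draining: no alarm
    return level[-1] / burn_rate

# Example usage: alarm when below a threshold agreed with domain experts.
# if remaining_hours(tank_df, capacity_litres=10_000) < 48:
#     send_to_messaging_interface(...)
```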
These results are distributed in a slightly different manner for each output system. For instance, on one hand, AMS shows only the alarms where the client is paying for remote monitoring or Aggreko is responsible for refueling. On the other hand, the Power BI dashboard will contain the predictions for the complete tank fleet. The fuel management team will receive this information and use it to plan the refueling of the tanks. In this way, we are currently minimizing the disruption to the customer and the logistics cost to Aggreko. Now, I would like to do a quick demo of the Power BI dashboard for this use case. On the first page, there is some general information about the motivation behind the fuel prediction dashboard and the project in general. It also contains information about the data that we use and how the model works. Moreover, you can find information on the refresh rate of the report and of the prediction models that feed the different systems. And finally, you will find information on how to use the filters and the dashboard in general. Moving to the second page, here we find the fuel tank information table. It contains information about the complete tank fleet currently on hire: detailed information about the assets, their asset number, who is responsible for refueling that asset, whether the customer is paying for remote monitoring, the tank capacity, the latest fuel level reading that we have, the latest predicted remaining hours of fuel, the battery percentage of the fuel level sensor, the agreement number, the on-hire date, and the last communication from the IoT unit to Delta Lake. It also contains information about the customer, service center, region, and location, but I’m not showing that due to its sensitivity. On the right-hand side, we find different filter criteria that we can use to tailor this dashboard view to our needs, such as the asset number, agreement number, and region, among others. The color of the rows represents assets that have passed the current threshold and for which the system has raised an alarm. If we select an asset and drill through, we will find a few plots for this asset. On this page, we find the fuel tank level plot. It shows us the last two months of consumption data for this tank. It also contains the prediction that we have made for the tank if it is used continuously until empty. At the bottom, we have a slicer that allows us to see the plot in more detail. For example, it shows us that this asset has been affected by high ambient temperature, as the fuel level sensor is showing small bits of consumption and refueling. On the right-hand side, we find extra information about the tank, such as the predicted time when the tank is going to run out of fuel, the last 24 hours of consumption after the data has been cleaned, the predicted remaining hours of fuel, and the estimated consumption rate per hour of the asset. And with this page, I would like to finish the demonstration of the Power BI dashboard and hand it back to Andy.
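As a rough sketch of the distribution rule Helena describes at the start of this section, the snippet below fans predictions out per channel: the Power BI feed gets every tank, while AMS only receives alarms where the client pays for remote monitoring or Aggreko is responsible for refueling, with the payload posted to an Azure Logic App HTTP trigger. The field names, payload shape, and URL are hypothetical.

```python
# A rough sketch of the per-channel distribution rule. Field names, the
# payload shape, and the Logic App URL are hypothetical stand-ins.
import requests

LOGIC_APP_URL = "https://example.logic.azure.com/workflows/fuel-alarms/triggers/manual"

def broadcast(predictions: list[dict]) -> None:
    # The Power BI dashboard is fed predictions for the full on-hire fleet
    # (in practice via a curated table rather than a direct push).
    powerbi_rows = predictions

    # AMS only receives alarms where the client pays for remote monitoring
    # or Aggreko is responsible for refuelling.
    ams_rows = [
        p for p in predictions
        if p["alarm"] and (p["remote_monitoring"] or p["aggreko_refuels"])
    ]

    # The messaging interface posts to an Azure Logic App HTTP trigger,
    # which fans the payload out to the downstream systems.
    requests.post(
        LOGIC_APP_URL,
        json={"powerbi": powerbi_rows, "ams": ams_rows},
        timeout=10,
    )
```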

– So thanks Helena. Just to summarize some of the points we’ve discussed today. First of all, hopefully we’ve shown how analytics at Aggreko is really embedded as part of our operational business, that collaboration is at the heart of our strategy and critical to our success, especially for data science and machine learning projects, and that that success depends on supporting established processes, so that we ensure uptake, feedback, and value. We discussed how our global presence and 24/7 operations mean we have to provide insight in multiple different ways, dynamically as issues arise, and be flexible with the types of outputs we’re providing, and how our platform and tool choices are designed with that flexibility in mind. Finally, we’ve taken you through particular examples showing how our machine learning models are driving real value, and we’re doing that to the tune of millions of dollars of value annually. The prediction driven logistics solution that Helena demoed showed how we use machine learning and data to drive an improvement in our operational efficiency and our levels of customer service. So with that, I’d like to thank you all for joining, and open up for any questions.


 
About Helena Orihuela

Aggreko

Helena is a data scientist and machine learning engineer with several years’ experience building robust and scalable products that use data to create actionable insights. Her work at Aggreko has recently been focused on designing and optimising our ML development process and contributing to our general MLOps strategy. She leads the delivery of our predictive maintenance program of work.

About Andy McMahon

Aggreko

Andy leads a team of data scientists and machine learning engineers who work on advanced analytics solutions for use across the business. The team utilise large scale cloud computing capabilities like Databricks and Azure Delta Lake to deliver bespoke algorithms for complex use cases like predictive maintenance or revenue forecasting.