How to Accelerate Demand Planning From 4.5 Hours to Under 1 Hour With Azure Databricks
The importance of supply chain analytics
Rapid changes in consumer purchase behavior can have a material impact on supply chain planning, inventory management, and business results. Accurate forecasts of consumer-driven demand are just the starting point for optimizing profitability and other business outcomes. Swift inventory adjustments across distribution networks are critical to ensure supply meets demand while minimizing shipping costs for consumers. In addition, consumers redeem seasonal offers and purchase add-ons and subscriptions, all of which affect product supply and logistics planning.
Supply chain analytics at ButcherBox
ButcherBox faced extremely complex demand planning as it sought to ensure inventory with sufficient lead times, meet highly variable customer order preferences, navigate unpredictable customer sign-ups and manage delivery logistics. It needed a predictive solution that could address these challenges, adapt quickly and integrate tightly with the rest of its Azure data estate.
“Though ButcherBox was cloud-born, all our teams used spreadsheets,” said Jimmy Cooper, Head of Data, ButcherBox. “Because of this, we were working with outdated data from the moment a report was published. It’s a very different world now that we’re working with Azure Databricks.”
How ButcherBox streamlined supply chain analytics
ButcherBox uses Azure Databricks to generate its Demand Plan. When Azure Data Factory (ADF) triggers the Demand Plan run, Azure Databricks processes supply chain data from Azure Data Lake, vendor data and Hive caches. New outputs are stored back in the data lake, and Azure Synapse then refreshes the production Demand Plan visualizations.
ButcherBox uses Azure Databricks to ingest real-time streams of raw data from vendors and internal sources, alongside historical data. Azure Databricks reconciles this data at the item, box and distribution levels so users can view demand for the upcoming year. The data is then used for retention modeling and pushed to Azure Synapse for historical comparison.
Apache Spark SQL in Azure Databricks is designed to be compatible with Apache Hive. ButcherBox uses Hive to cache data from CSV files and then processes the cached data in Azure Databricks, cutting Demand Plan calculation times from 4.5 hours to less than one hour. As a result, an updated Demand Plan is available to business users every morning to aid decision-making. Ingestion of these data streams also created trustworthy datasets for other processes and activities to consume. These new tools and capabilities helped ButcherBox quickly understand and adjust to changes in member behavior, especially in the midst of the COVID-19 pandemic.
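As a minimal sketch of that caching pattern, assuming a vendor CSV has already landed in the data lake (the path and table name below are illustrative, not ButcherBox's actual objects), the file can be registered as a Hive table and cached so repeated Demand Plan queries read from memory:
# read the raw vendor CSV from the data lake (hypothetical path)
vendor_df = spark.read.csv(
  '/mnt/datalake/vendor/orders.csv',
  header=True,
  inferSchema=True
)
# persist it as a table in the Hive metastore (hypothetical table name)
vendor_df.write.mode('overwrite').saveAsTable('vendor_orders')
# cache the table so downstream queries are served from memory
spark.sql('CACHE TABLE vendor_orders')
spark.sql('SELECT COUNT(*) FROM vendor_orders').show()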
Create your first demand forecast using Azure Databricks
To get started using Azure Databricks for demand forecasts, download this sample notebook and import it into your Azure Databricks workspace.
Step 1: Load Store-Item Sales Data
Our training dataset is five years of transactional data across ten different stores. We’ll define a schema, read our data into a DataFrame and then create a temporary view for subsequent querying.
from pyspark.sql.types import *

# structure of the training data set
train_schema = StructType([
  StructField('date', DateType()),
  StructField('store', IntegerType()),
  StructField('item', IntegerType()),
  StructField('sales', IntegerType())
])

# read the training file into a dataframe
train = spark.read.csv(
  '/FileStore/tables/demand_forecast/train/train.csv',
  header=True,
  schema=train_schema
)

# make the dataframe queryable as a temporary view
train.createOrReplaceTempView('train')
Step 2: Examine data
Aggregating the data at the month level, as in the query sketched below, we can observe a clear annual seasonality pattern that grows in amplitude over time. The same query can be restructured to look for other patterns, such as weekly seasonality and overall sales growth.
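The query below is one way to perform that monthly roll-up against the train view created in Step 1; grouping by day of week instead would surface the weekly pattern.
# aggregate daily sales to the month level to expose yearly seasonality
monthly_sales = spark.sql('''
  SELECT
    TRUNC(date, 'MM') as month,
    SUM(sales) as sales
  FROM train
  GROUP BY TRUNC(date, 'MM')
  ORDER BY month
  ''')

display(monthly_sales)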
Step 3: Assemble historical dataset
From the data loaded previously, we can build a Pandas DataFrame by querying the "train" temporary view and then removing any missing values.
# query the daily sales history for store 1, item 1, using the ds/y column names Prophet expects
sql_statement = '''
  SELECT
    CAST(date as date) as ds,
    sales as y
  FROM train
  WHERE store=1 AND item=1
  ORDER BY ds
  '''

# assemble the dataset as a Pandas dataframe
history_pd = spark.sql(sql_statement).toPandas()

# drop any missing records
history_pd = history_pd.dropna()
Step 4: Build model
Based on our exploration of the data, we set model parameters that reflect the observed growth and seasonal patterns: a linear growth trend, weekly and yearly seasonality enabled, and, because the seasonal swings grow with overall sales, multiplicative seasonality. Once the parameters are set, we can easily fit the model to the cleansed historical data.
from prophet import Prophet  # on older runtimes this package is named fbprophet

# set model parameters
model = Prophet(
  interval_width=0.95,
  growth='linear',
  daily_seasonality=False,
  weekly_seasonality=True,
  yearly_seasonality=True,
  seasonality_mode='multiplicative'
)

# fit the model to historical data
model.fit(history_pd)
Step 5: Use a trained model to build a 90-day forecast
Now that the model is trained, we can use it to build a forecast similar to the one ButcherBox uses in its Demand Plan. As shown below, we first construct a dataframe spanning both the historical dates and 90 days beyond the last available date, then predict over it.
# define a dataset including both historical dates & 90 days beyond the last available date
future_pd = model.make_future_dataframe(
  periods=90,
  freq='d',
  include_history=True
)

# predict over the dataset
forecast_pd = model.predict(future_pd)

display(forecast_pd)
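The forecast_pd DataFrame returned by Prophet carries a number of component columns; for most reporting purposes the per-date prediction (yhat) and its upper and lower uncertainty bounds are all that is needed, so a trimmed view such as the following is often more convenient:
# keep just the per-date prediction and its 95% uncertainty interval
predictions_pd = forecast_pd[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]
display(predictions_pd)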
Once we have predicted over the future dataset, we can plot the model's overall trend and its weekly and yearly seasonal components as graphs.
trends_fig = model.plot_components(forecast_pd)
display(trends_fig)
Learn more by joining an Azure Databricks event and get started right away with this 3-part training series.