Last month, I had the opportunity to attend IBF’s Business Forecasting & Planning Academy held in Las Vegas. I recently shared some insights from the first day of the program. Day 2 was similarly eventful. Here are some highlights.
Forecast Error
The first session I attended on Tuesday was “How to Measure & Reduce Error, and the Cost of Being Wrong,” an advanced session presented by Dr. Chaman Jain from St. John’s University. Dr. Jain reviewed the basic methods and mechanics of computing forecast error and the pros and cons of each technique. Interestingly, IBF has found that more and more companies are moving from MAPE (Mean Absolute Percentage Error) to a Weighted MAPE (WMAPE) to focus their attention on the errors with the largest business impact. Standard MAPE treats all errors “equally,” while WMAPE places greater significance on errors associated with the “larger” items. The weighting mechanism can vary; unit sales are typical, but I was intrigued by the notion of using sales revenue or profit margin as well. If a company has low-volume items that generate outsized revenue and profit, it would not want to miss the opportunity to focus attention on why those items have significant errors.
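To make the difference concrete, here is a minimal sketch in Python with made-up numbers. The revenue-weighted formulation shown is just one common way to define WMAPE, not necessarily the exact form Dr. Jain presented:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error: every item's error counts equally."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast) / actual) * 100

def wmape(actual, forecast, weights):
    """Weighted MAPE: each item's percentage error is weighted, e.g., by
    unit volume, revenue, or profit, so "big" items dominate the metric."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    weights = np.asarray(weights, float)
    pct_err = np.abs(actual - forecast) / actual
    return np.sum(weights * pct_err) / np.sum(weights) * 100

# Illustrative data: item C is low volume but high revenue.
units    = np.array([1000.0, 800.0, 50.0])    # actual unit sales
forecast = np.array([ 950.0, 900.0, 30.0])    # forecasted unit sales
revenue  = np.array([5000.0, 4000.0, 25000.0])

print(f"MAPE:                {mape(units, forecast):.1f}%")
print(f"WMAPE (unit volume): {wmape(units, forecast, units):.1f}%")
print(f"WMAPE (revenue):     {wmape(units, forecast, revenue):.1f}%")
```

Unit weighting plays down the small item’s 40% error, while revenue weighting makes it dominate the metric, which is exactly the signal a planner would want on a low-volume, high-margin item.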
Another interesting concept that Dr. Jain discussed was the use of confidence intervals around error measurements. Many companies report their error measurement as a single number and rarely present it as a range of likely errors. Having a view into the potential range of errors allows firms to exercise scenario planning and understand the impact on supply chain operations and the associated sales under multiple forecast errors instead of a single number.
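Dr. Jain did not prescribe a particular method for constructing the interval; a simple bootstrap over the per-period errors, sketched below with simulated data, is one way to report a likely error range rather than a point estimate:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mape_interval(actual, forecast, n_boot=10_000, level=0.90):
    """Bootstrap a confidence interval for MAPE by resampling the
    per-period percentage errors with replacement."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    pct_err = np.abs(actual - forecast) / actual
    idx = rng.integers(0, len(pct_err), size=(n_boot, len(pct_err)))
    boot_mapes = pct_err[idx].mean(axis=1) * 100
    tail = (1 - level) / 2 * 100
    return np.percentile(boot_mapes, [tail, 100 - tail])

actual = rng.normal(100, 10, size=52)            # a year of weekly demand
forecast = actual + rng.normal(0, 8, size=52)    # forecasts with some noise
lo, hi = bootstrap_mape_interval(actual, forecast)
print(f"MAPE likely falls between {lo:.1f}% and {hi:.1f}% (90% interval)")
```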
My last takeaway relates to the question of how much history should be used to support time series analysis. Dr. Jain stated, and I believe rightly so, that it depends. Are there potential seasonality, trend, business cycles, or one-time events? How much history does one need to see them? What if the past is really no longer a good indicator of the future? What if the drivers of demand for a product have substantially shifted? One suggested technique that seems sound is to test the forecasting model’s performance using different amounts of historical data. Use a portion of the history to build the model, and the remaining portion to test the accuracy of the forecast against the actuals held out of model construction. Try different lengths until you find the one with the lowest error, and allow the process to use a different history length for each time series forecast.
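Here is a sketch of that holdout idea; the “model” below is just the mean of the fitted window, standing in for whatever forecasting method you actually use:

```python
import numpy as np

def holdout_mape(history, n_fit, horizon=4):
    """Fit on the last n_fit points before the holdout window, forecast
    the holdout, and return the MAPE on the held-out actuals."""
    fit = history[-(n_fit + horizon):-horizon]
    test = history[-horizon:]
    forecast = np.full(horizon, fit.mean())   # stand-in for a real model
    return np.mean(np.abs(test - forecast) / test) * 100

rng = np.random.default_rng(7)
history = rng.normal(200, 20, size=60)        # 60 periods of demand

# Try several history lengths and keep the one with the lowest holdout error.
candidates = [8, 12, 24, 36, 52]
errors = {n: holdout_mape(history, n) for n in candidates}
best = min(errors, key=errors.get)
print(f"best history length: {best} periods (MAPE {errors[best]:.1f}%)")
```

In a real process this search would run per time series, so each item ends up with its own best history length.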
Lean Forecasting & Planning
Next, I attended another advanced session, led by Jeff Marthins from Tasty Baking Company/Flowers Foods, on “Lean Forecasting & Planning: Preparing Forecasts Faster with Less Resources”. The session focused on doing more with less, a common theme that has permeated the business world these last several years. Marthins’ session was really about how to focus on what matters in demand planning: looking at the overall process, agreeing to and sticking with the various roles and responsibilities in the process, and understanding how the resulting forecasts and plans will be used by the various consumers in the business, which drives the required level of detail, accuracy, and frequency of updates.
To gain an understanding of the demand planning process, Marthins asked the participants to look at a picture of his refrigerator and answer “Do I have enough milk?” This relatively simple, fun question elicited numerous inquiries from the participants around consumption patterns, replenishment policies and practices, sourcing rules, supplier capacity and financial constraints that illustrated the various types and sources of information that are required to develop a solid, well-thought-out demand plan. It was a very effective approach that can be applied to any product in any company.
To illustrate the need to understand the level of accuracy required of a forecast, Marthins used the weather forecast. How accurate is the weather forecast? How often is it right? How precise does it need to be? Once we know the temperature is going to be above 90 degrees Fahrenheit, does it matter if it is 91 or 94 degrees? Is there a big difference between a 70% chance of rain and an 85% chance of rain? What will you do differently in these situations with a more precise weather forecast? Should I plan to grill tonight? Will I need to wear a sweater this evening? Can we go swimming? If the answer is nothing, then the precision does not really matter, and spending time and effort creating or searching for greater forecast accuracy is a “waste,” and in Lean thinking wastes should be eliminated or reduced.

Marthins also stressed the value of designing your demand planning process with the usage of the information in mind. Adopting a Forecast Value Add (FVA) mentality, assessing whether each step in your forecasting and demand planning process is adding value, will help to accomplish this. Start by asking whether the first step in your forecasting process results in greater accuracy than a naïve forecast, such as using the same number as last time you forecasted, or a simple moving average. If your accuracy improves with each step in the process, is the improvement worth the effort and time it takes? Can I be less accurate but more responsive and still avoid a negative impact? If I can update my forecast every day with 90% accuracy versus once a week with 92% accuracy, or once a month with 96%, which is better? How responsive can I be to the market by making daily adjustments that are nearly as accurate as weekly ones?
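Here is a small, simulated illustration of the FVA calculation; the “statistical model” and “override” series below are invented, and in practice each step is often compared to the step before it as well as to the naïve benchmark:

```python
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast) / actual) * 100

rng = np.random.default_rng(1)
actual = rng.normal(500, 50, size=26)       # half a year of weekly demand

# Naive benchmark: last period's actual carried forward.
naive = np.roll(actual, 1)
naive[0] = actual[0]

# Hypothetical outputs of two process steps (statistical model, then overrides).
stat_model = actual + rng.normal(0, 30, size=26)
overridden = actual + rng.normal(0, 35, size=26)

base = mape(actual, naive)
for name, fc in [("statistical model", stat_model),
                 ("after manual overrides", overridden)]:
    fva = base - mape(actual, fc)   # positive FVA = step beat the naive forecast
    print(f"{name}: FVA = {fva:+.1f} MAPE points vs. naive")
```

Any step whose FVA is near zero or negative is a candidate for the Lean “waste” treatment: simplify it or drop it.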
In yet another session, the topic of scenario analysis was raised. The team at IBF is getting this one right by making sure it is discussed in multiple sessions. What I wonder is how many companies have adopted scenario analysis in their demand planning and S&OP processes? From my experience, it is not the norm. Marthins suggested testing the impact that various forecasts, and hence forecast accuracies, would have on supply chain performance, and even using scenario analysis to determine whether a systematic bias, either high or low, might make sense. I have known companies that deliberately overestimated so that the resulting demand plan was on the high side. Carrying more inventory, even with all the associated costs, was of greater benefit to the company than a lost sale or backorder. Bias is not a bad thing if you understand how it is used and its resulting impact, just as inventory is not an evil when used in a planned and methodical manner.
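A toy newsvendor-style simulation makes the point; the holding and shortage costs below are assumptions for illustration, not numbers from the session:

```python
import numpy as np

rng = np.random.default_rng(3)
demand = rng.normal(1000, 150, size=10_000)   # simulated demand scenarios

HOLDING_COST = 1.0     # cost per unit of unsold inventory (assumed)
SHORTAGE_COST = 4.0    # cost per unit of lost sale/backorder (assumed)

# Evaluate deliberately biasing the plan high or low by a few percent.
for bias_pct in (-5, 0, 5, 10):
    plan = 1000 * (1 + bias_pct / 100)
    overage = np.maximum(plan - demand, 0).mean() * HOLDING_COST
    underage = np.maximum(demand - plan, 0).mean() * SHORTAGE_COST
    print(f"bias {bias_pct:+3d}%: expected cost = {overage + underage:7.1f}")
```

When a lost sale costs more than carrying a unit, the lowest expected cost sits at a plan biased above the mean, which is exactly the deliberate high-side bias described above.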
Data Cleansing
After lunch I attended my second session delivered by Mark Lawless from IBF, “Data Cleansing: How to Select, Clean, and Manage Data for Greater Forecasting Performance”. As in any analytical process, the quality of the inputs is crucial to delivering quality results. Unfortunately, I had another commitment during the session and could not stay for all of it.
Lawless discussed a variety of ways to look at the available data, decide whether it should be used, update or modify it, fill in missing values, and apply various forecasting techniques. Simple tips, such as being aware of how data is bucketed into time periods, e.g., fiscal months (4/4/5) versus calendar months, and how it should be reported, were good reminders to make sure the data inputs are clearly understood, as well as how the output from the forecasting process will be used.
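As a quick illustration of the fiscal-calendar point, here is one way a 4/4/5 mapping might look in code; real fiscal calendars differ by company (53-week years, different quarter patterns), so treat this as illustrative only:

```python
# Weeks per fiscal month under a 4/4/5 pattern, repeated for four quarters.
FISCAL_PATTERN = [4, 4, 5] * 4   # sums to 52 weeks

def fiscal_month(week):
    """Return the 1-based fiscal month containing the given week (1-52)."""
    for month, weeks in enumerate(FISCAL_PATTERN, start=1):
        if week <= weeks:
            return month
        week -= weeks
    raise ValueError("week must be between 1 and 52")

print(fiscal_month(4))    # 1  (last week of the first 4-week month)
print(fiscal_month(9))    # 3  (first week of the 5-week month)
print(fiscal_month(52))   # 12
```

The practical consequence: a 5-week fiscal month will look like a demand spike next to its 4-week neighbors unless the data is normalized, which is exactly the kind of input misunderstanding Lawless was warning against.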
While most of what I heard related to the data going into the forecasting process, Lawless did spend time talking about various analytics for assessing the output of the process. You might expect me to talk about various error and bias metrics again, but that is not the case. Rather, the idea is to look at the error measurements over time. What is the distribution of errors? Do they show a pattern, or are they random? If there is a pattern, there is likely something “wrong” with the forecasting process. It made me think about applying Statistical Process Control (SPC) techniques, which are most often applied to manufacturing processes but can be applied to any process. SPC control charts can be used to check for patterns such as trends, sustained systematic increases, extended periods of unexpectedly high or low errors, the randomness of errors, and many more. It gets back to the notion that to improve the quality of the demand planning process, it must be evaluated on a regular basis, and the causes of its underperformance understood and corrected as much as is possible or warranted.
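As a sketch of the idea, the snippet below applies two classic control chart rules to a series of forecast errors; in practice the control limits would be set from a known-stable baseline period rather than from the full series as done here:

```python
import numpy as np

rng = np.random.default_rng(11)
errors = rng.normal(0, 5, size=24)    # forecast errors over 24 periods
errors[14:] += 12                     # inject a sustained upward shift

center = errors.mean()
sigma = errors.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Rule 1: any point beyond the 3-sigma control limits.
beyond = np.where((errors > ucl) | (errors < lcl))[0]

# Rule 2 (a common run rule): 8 consecutive points on one side of center.
signs = np.sign(errors - center)
runs = [i for i in range(len(signs) - 7) if abs(signs[i:i + 8].sum()) == 8]

print(f"center {center:.1f}, limits [{lcl:.1f}, {ucl:.1f}]")
print("points beyond limits:", beyond)
print("8-in-a-row runs start at:", runs)
```

An in-control error chart means the forecast process is behaving as expected; a run of same-side errors is the chart's way of flagging a systematic bias worth investigating.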
Regression Analysis/Causal Modeling
The final advanced session of the Academy was delivered by Charles Chase from the SAS Institute on “Analytics for Predicting Sales on Promotional Activities, Events, Demand Signals, and More”. This session was about regression modeling on steroids. As someone who has used regression models throughout my career, I could easily relate to and appreciate what Chase was discussing. In two hours, Chase did a great job exposing attendees to the concepts, proper use, and mechanics of multivariate regression modeling, material that would typically be taught as an entire course over several weeks.
While time series models are a staple for forecasting future demand, they provide little to no understanding of what can be done to influence demand higher or lower. They can decompose demand into components such as trend, seasonality, and cycles, which are important to understand and respond to, and they are focused on the “accuracy” of the predicted future. Regression models, however, describe how inputs affect the output, which makes them an excellent tool for shaping demand. Regression models can help us understand the effect that internal factors such as price, promotional activity, and lead times, as well as external factors such as weather, currency fluctuations, and inflation rates, have on demand. The more we can build predictive models of demand based on internal factors, the more we can influence the resulting demand, since these are factors we control or influence as a firm. If external factors are included, forecasts of the future values of those inputs will be needed, and we become more reliant on the accuracy of the input forecasts to drive our modeled demand.
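To ground this, here is a minimal sketch of a multivariate demand regression on simulated data; the drivers and coefficients are invented for illustration, and a production model would use a richer toolset than plain least squares:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 104   # two years of weekly data

# Simulated drivers: price and promotion are internal, temperature external.
price = rng.uniform(4.0, 6.0, n)
promo = rng.integers(0, 2, n).astype(float)
temp = rng.normal(60, 15, n)
demand = 500 - 40 * price + 120 * promo + 1.5 * temp + rng.normal(0, 20, n)

# Ordinary least squares fit of demand on the drivers.
X = np.column_stack([np.ones(n), price, promo, temp])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
print("intercept, price, promo, temp coefficients:", np.round(beta, 1))

# Demand shaping: predicted demand if we run a promo at $4.50 and 65F.
scenario = np.array([1.0, 4.50, 1.0, 65.0])
print(f"predicted demand: {scenario @ beta:.0f} units")
```

The scenario line is the demand shaping use in miniature: plug in a planned price and promotion, and the model predicts the demand response. Note that the temperature input is an external factor, so using the model for the future means forecasting temperature first.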
In case you missed it, you can see pictures from the 2015 IBF Academy HERE.
I trust I have brought some insight into IBF’s recent Academy in Las Vegas and perhaps offered a nugget or two for you to improve your forecasting and demand planning activities. If only I had learned something I could apply to forecasting success at the gaming tables :).