Mike Gilliland

Below are questions and answers from IBF's webinar "What Management Must Know About Forecasting." If you missed it, no worries. You can view the complimentary recording by clicking HERE.

1. If a product is not forecastable, what's the most appropriate step to make the product forecastable?

Answer: The most effective way to improve forecast accuracy is to "make the demand forecastable," and a great way to do that is to lower the volatility of demand. Much of what we do with our organizational policies and practices adds volatility: we encourage our customers to buy in spikes, and we encourage our salespeople to sell that way. This is completely contrary to quality management practices, which are all about removing volatility and making everything more stable and predictable.

Review sales and financial practices that encourage volatility, and either re-engineer or eliminate them, replacing them with practices that encourage everything to operate more smoothly. (Examples of practices that encourage volatility are pricing and promotional activities, and the quarter-end "hockey stick" to meet short-term revenue goals.) You should question whether these sorts of practices make sense by contributing to the long-term profitability of your business. If not, pursue ways to reduce volatility and encourage smooth, stable growth. This will allow you to forecast more accurately and will reduce overall costs, which you can then pass along to your customers.

2. All of this is relative to the baseline forecast, correct? What if your items are heavily promotion-driven?

Answer: The accuracy of a naïve forecasting model serves as the baseline against which the performance of alternative forecasting methods should be compared. Thus, if the naïve model (say, a moving average) achieves a MAPE of 40%, then I want to know how well my statistical model is forecasting, and how well my overall process is forecasting, and compare them to the baseline 40% MAPE that the naïve model delivered. This is what I mean by "baseline."
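To make the comparison concrete, here is a minimal Python sketch. All demand numbers are made up, and for simplicity I use a last-value naïve model rather than a moving average; the comparison works the same way:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percent Error (in percent), skipping zero-actual periods."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    mask = actual != 0
    return 100 * np.mean(np.abs(actual[mask] - forecast[mask]) / actual[mask])

# Made-up weekly demand history
actual = np.array([120, 95, 140, 110, 130, 105, 125, 115], dtype=float)

# Naive baseline: forecast each week with the prior week's actual
naive_forecast = actual[:-1]
baseline_mape = mape(actual[1:], naive_forecast)

# Hypothetical forecasts from a statistical model for the same weeks
model_forecast = np.array([118, 100, 132, 112, 128, 108, 122], dtype=float)
model_mape = mape(actual[1:], model_forecast)

print(f"Naive MAPE: {baseline_mape:.1f}%")
print(f"Model MAPE: {model_mape:.1f}%")
if model_mape < baseline_mape:
    print("The model adds value over the naive baseline.")
else:
    print("The model does not beat the naive baseline.")
```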

This should not be confused with what is commonly called the "baseline" forecast, where you try to distinguish baseline demand from promoted demand. How do you know what demand was baseline and what was due to the promotion? How do you distinguish the two? I don't believe you can separate baseline demand from promoted demand in a clean, easy, or certain manner, so I would suggest not bothering to try. What matters is "how much total demand is there going to be?" It isn't necessary for me to care how much of it is "baseline" and how much is due to "promotion," and I can never know for sure anyway. Don't assume you are making your forecasts more accurate by trying to distinguish the two; you may just be making things more complex.

3. What is FVA?  A tool?  Expert judgment?  Or what?

Answer: Forecast Value Added (FVA) is a metric, defined as the change in a forecasting performance metric (such as MAPE, forecast accuracy, or bias) that can be attributed to a particular step or participant in the forecasting process. When a process step or participant makes the forecast more accurate or less biased, they are "adding value." FVA is negative when the step or participant is just making the forecast worse. FVA analysis is the method of reviewing the performance of your process and identifying those non-value-adding (or negative-value-adding) activities that should be eliminated. For more information on FVA analysis, see the webinar "Forecast Value Added Analysis: Step-by-Step" or the accompanying white paper. You are also encouraged to attend the IBF Supply Chain Forecasting Conference in Phoenix, February 22-23, 2010, to learn how to do FVA analysis and hear case studies from several organizations (such as Intel) that are using this method.
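As a rough illustration of the calculation, here is a Python sketch comparing a naïve forecast, a statistical forecast, and a manually overridden forecast. The step names and all the numbers are invented:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percent Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100 * np.mean(np.abs(actual - forecast) / actual)

# Made-up actuals and the forecasts produced at each step of the process
actual = np.array([100, 120, 90, 110, 105, 95], dtype=float)
forecasts = {
    "Naive":       np.array([105, 100, 120,  90, 110, 105], dtype=float),
    "Statistical": np.array([102, 115,  95, 108, 103,  98], dtype=float),
    "Override":    np.array([110, 128, 100, 118, 112, 104], dtype=float),
}

prior = None
for step, fc in forecasts.items():
    m = mape(actual, fc)
    if prior is None:
        print(f"{step:12s} MAPE: {m:5.1f}%")
    else:
        # Positive FVA: this step reduced MAPE relative to the prior step
        print(f"{step:12s} MAPE: {m:5.1f}%   FVA: {prior - m:+.1f} points")
    prior = m
```

In this made-up example the statistical model adds value over the naïve forecast, while the manual override makes the forecast worse (negative FVA).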

4. What are the methods commonly used to measure forecast accuracy? (Is MAPE the most common?) And what is a good process to determine forecast accuracy?

Answer: Mean Absolute Percent Error (MAPE) and its variations, such as Weighted MAPE and Symmetric MAPE, seem to be the most popular metrics of forecasting performance. MAPE has many well-known limitations (such as being undefined when the denominator, the actual demand, is zero) and is not suitable for data with a lot of zeroes (intermittent demand). Also note that with MAPE you can have absolute percent errors greater than 100%, so you cannot simply define forecast accuracy as 100% – MAPE.

For management reporting I use a “Forecast Accuracy” (FA) metric, defined as:

1 – { Σ | Forecast – Actual |  /  Σ Maximum (Forecast, Actual) }

Note: FA is defined as 100% when both Forecast and Actual are zero.

By using the maximum of Forecast and Actual in the denominator, FA is always scaled between 0% and 100%, making it very easy for management to understand. That is why I favor it, even though some professional forecasters are very critical of this metric.
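A minimal Python implementation of this FA metric might look like the following (the numbers are made up):

```python
import numpy as np

def forecast_accuracy(forecast, actual):
    """FA = 1 - sum(|F - A|) / sum(max(F, A)); always between 0 and 1."""
    f = np.asarray(forecast, dtype=float)
    a = np.asarray(actual, dtype=float)
    denom = np.maximum(f, a).sum()
    if denom == 0:
        return 1.0  # per the definition: FA is 100% when F and A are both zero
    return 1 - np.abs(f - a).sum() / denom

# Made-up numbers, including a period with zero forecast and zero actual
forecast = [100,  80, 0, 50]
actual   = [ 90, 100, 0, 60]
print(f"FA = {forecast_accuracy(forecast, actual):.1%}")  # ~84.6%
```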

5. What is your perspective on differentiating volatile demand from uncertain demand? In my opinion, uncertainty is related to an event and volatility is related to demand fluctuations. Is that right?

Answer: Volatility is expressed by the Coefficient of Variation (CV), which is the standard deviation divided by the mean.  For example, look at the last 52 weeks of sales, and compute the CV of that pattern.  In general, the more volatile (i.e. erratic and variable) the demand, the more difficult it is to forecast accurately.  Recall the Accuracy vs. Volatility scatterplot in the webinar.
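As a quick illustration, here is a Python sketch that computes the CV for two simulated items (the data are randomly generated, not real demand):

```python
import numpy as np

def cv(series):
    """Coefficient of Variation: standard deviation divided by the mean."""
    series = np.asarray(series, dtype=float)
    return series.std() / series.mean()

rng = np.random.default_rng(42)
smooth_item   = rng.normal(100, 10, 52).clip(min=0)  # 52 weeks, low variability
volatile_item = rng.normal(100, 60, 52).clip(min=0)  # 52 weeks, high variability

print(f"CV of smooth item:   {cv(smooth_item):.2f}")
print(f"CV of volatile item: {cv(volatile_item):.2f}")
```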

Sometimes we can forecast volatile demand quite accurately, when there is structure to the volatile pattern. You might see this for highly seasonal items, where you can always count on a big spike in demand at a certain time (e.g., bunny costumes and egg painting kits before Easter). Note: I'm not claiming we can forecast bunny costumes or egg painting kits accurately; I'm just using them as an illustration of volatility due to seasonality.

Volatility is measured by looking back at what really happened. If we expect high volatility to continue, we would probably have less confidence or certainty in our future forecasts. If volatility is very low, we can probably feel more secure (and certain) about our forecasts.

6. Is there any ratio to determine the horizon for the forecast to be measured?  Any industry correlation to lead times?

Answer: Forecasting performance should be reported relative to the supply lead time. Thus, if it takes 3 months to make changes in your supply, you should measure the accuracy of forecasts made 3 months in advance. Once inside this lead time, it is fine to continue making adjustments to the forecast, and many companies even report their forecast accuracy based on the forecast made immediately prior to the period being forecast. (Some companies even allow adjustments within the time period (e.g., week or month) being forecast, and then report that as their forecast accuracy.) However, it is the forecast made at the lead time that really tells you how well (or how poorly) you understand your business. Don't congratulate yourself on good forecasts made within the month being forecast!
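One way to operationalize this is to keep snapshots of each forecast by lag, then report accuracy at the lag matching your lead time. Here is a rough Python/pandas sketch with invented numbers:

```python
import pandas as pd

# Hypothetical forecast snapshots: each row is a forecast made `lag_months`
# before the target month, alongside the actual demand for that month
snapshots = pd.DataFrame({
    "target_month": ["2010-01", "2010-01", "2010-01",
                     "2010-02", "2010-02", "2010-02"],
    "lag_months":   [3, 1, 0, 3, 1, 0],
    "forecast":     [900, 980, 1010, 700, 760, 790],
    "actual":       [1000, 1000, 1000, 800, 800, 800],
})

LEAD_TIME = 3  # months needed to change your supply

snapshots["ape"] = (snapshots["forecast"] - snapshots["actual"]).abs() / snapshots["actual"]
mape_by_lag = snapshots.groupby("lag_months")["ape"].mean() * 100
print(mape_by_lag.round(1))  # short-lag forecasts will usually look best
print(f"Report the lag-{LEAD_TIME} MAPE: {mape_by_lag[LEAD_TIME]:.1f}%")
```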
Regarding the forecasting horizon (how far into the future you should forecast), this will vary based on your business needs. A power company forecasts years (even decades) ahead to know whether it will need to make capital investments in new power plants. For most companies, forecasting 12-18 months ahead is sufficient. And the forecasting process should always be "rolling," so that you always maintain that horizon of forecasts ahead of you.

Routinely doing 5-year-ahead forecasts you don't really need seems like a silly exercise. If management insists on forecasting farther ahead than necessary, don't waste much time doing it; it is very unlikely you can forecast accurately that far ahead. It is much better to keep your organization nimble and able to adapt to however your market changes over time than to fool yourself into thinking you can accurately predict that far into the future.

7. How can you determine "appropriateness for forecasting" when your time series is too short for out-of-sample testing?

Answer: When there is enough data, out-of-sample testing is a great way to help evaluate and select forecasting models. Good software, such as SAS Forecast Server, allows you to define and utilize a holdout sample in your model selection process. Poorly designed software will select a model based solely on "best fit" to recent history, and as was illustrated in the webinar, the best-fitting model may be a very poor choice for generating forecasts.

When there is not enough history to use a holdout sample, the appropriateness of a model is based on the judgment, experience, and domain expertise of the forecaster. In the webinar example, Model 4 fit the history perfectly, but its forecast exploded to huge values that probably weren't realistic (unless you had domain knowledge that demand would be significantly increasing, you were rolling out to new regions, etc.). Without any other information, using the mean (Model 1) or a simple trendline (Model 2) seemed most appropriate.
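To show what a holdout comparison looks like when you do have the data, here is a minimal Python sketch that mirrors the webinar's mean and trendline models; the history and the 4-period holdout are made up:

```python
import numpy as np

# Hypothetical monthly history; hold out the last 4 points for testing
history = np.array([100, 104,  98, 107, 103, 110,
                    106, 112, 109, 115, 111, 118], dtype=float)
train, holdout = history[:-4], history[-4:]

# "Model 1": forecast with the mean of the training data
mean_fc = np.full(len(holdout), train.mean())

# "Model 2": a simple linear trendline fit to the training data
t = np.arange(len(train))
slope, intercept = np.polyfit(t, train, 1)
trend_fc = intercept + slope * np.arange(len(train), len(history))

for name, fc in [("Mean", mean_fc), ("Trendline", trend_fc)]:
    mae = np.mean(np.abs(holdout - fc))
    print(f"{name:9s} holdout MAE: {mae:.1f}")
```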

8. Statistical modeling can be difficult in planning service parts demand. Can you give further input on planning for volatile service demand?

Answer: Demand for service parts is often intermittent, with lots of periods of zero demand. Intermittent demand is difficult to forecast accurately. Although there are various methods to help you forecast and manage inventory in these situations (see Croston's method and its variations), you should not have high expectations for accuracy. It may be easier (and just about as effective) to simply forecast the mean demand each period.
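For reference, here is a bare-bones Python sketch of the basic Croston logic (the demand series is made up, and production implementations add refinements such as bias corrections):

```python
def croston(demand, alpha=0.2):
    """Croston's method: smooth nonzero demand sizes and the intervals
    between them separately; the per-period forecast is size / interval."""
    z = p = None   # smoothed demand size, smoothed interval
    q = 1          # periods since the last nonzero demand
    forecasts = []
    for d in demand:
        forecasts.append(None if z is None else round(z / p, 2))
        if d > 0:
            if z is None:                    # initialize at first demand
                z, p = float(d), float(q)
            else:
                z = alpha * d + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return forecasts

demand = [0, 0, 5, 0, 0, 0, 3, 0, 4, 0, 0, 6]
print(croston(demand))
```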

Sometimes there is sufficiently high demand for the service parts that you can use standard time series methods to forecast. It may be helpful to incorporate known sales of the items requiring the parts, so you can base your forecasts on failure rates. Thus, if you know 100,000 units of a product were sold, and that 10% require servicing every year, you can estimate that about 10,000 of the service parts will be needed each year.

One other approach, more applicable to high-value machinery (e.g., jet engines, ships, factory production lines), is to use knowledge of the routine maintenance schedule. If you sell 1,000 jet engines and the maintenance schedule says a part is replaced every 6 months, then you can use this to forecast demand for that part.

9. Do you have examples available of cost of inaccuracy metrics?

Answer: I do not have access to the Cost of Inaccuracy metric used at Yokohama Tire Canada by Jonathon Karelse. However, Jonathon will be speaking at the IBF's Demand Planning & Forecasting: Best Practices Conference in San Francisco (April 28-30), so you could follow up with him there.

IBF members have access to a cost of inaccuracy spreadsheet available on the IBF website. Also, analyst firm AMR has published research (which you can access if you are an AMR subscriber) on the costs of forecast inaccuracy.

Any such cost calculators are based on a number of assumptions that you provide, so be cautious in your use of them and in your interpretation of the results. Personally, I'm very skeptical of claims such as "Reducing forecast error by 1% will reduce your inventory costs by x%." If nobody in your organization trusts your forecasts now, reducing the error by 1% is not going to make anybody more trusting of the forecast; they won't change any behavior, so you won't reduce inventory. It may take more substantial improvement to reap the cost benefits.

10. Does anyone work in the call or contact center environment for an inbound customer service center?

Answer: These principles can be applied to forecasting demand for services, such as forecasting call center staffing needs. The major difference is the time bucket of interest: call centers often forecast in 15- or 30-minute increments (rather than the weeks or months a manufacturer uses), to make sure they are sufficiently staffed during peak call periods and not overstaffed during low-volume times.
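As a simple illustration of the difference in time bucket, here is a Python/pandas sketch that aggregates hypothetical call arrival times into 30-minute buckets:

```python
import pandas as pd

# Hypothetical call arrival timestamps for one morning
calls = pd.DataFrame({
    "arrived": pd.to_datetime([
        "2010-02-22 08:03", "2010-02-22 08:12", "2010-02-22 08:44",
        "2010-02-22 09:01", "2010-02-22 09:05", "2010-02-22 09:22",
        "2010-02-22 09:48", "2010-02-22 10:02",
    ])
})

# Count call volume in 30-minute buckets, the grain a call center would
# forecast at, versus the weeks or months a manufacturer typically uses
volume = calls.set_index("arrived").resample("30min").size()
print(volume)
```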

Michael Gilliland
Product Marketing Manager, SAS
IBF Board of Advisors

See MICHAEL GILLILAND & EMILY RODRIGUEZ from INTEL speak in Phoenix at IBF's Supply Chain Forecasting Conference:

$695 USD for Conference Only!

February 22-23, 2010
Phoenix, Arizona USA