IBF Year End Blog – What We Learned About Forecasting in 2010

Mike Gilliland: The BFD

Our ability to forecast was met with increasing skepticism in 2010 – and this is a good thing.  A decade ago, the thrill of technological innovation provided hope that more data, bigger computers, and fancier models would one day solve all our forecasting problems.  Yet we now have more data, bigger computers, and fancier models than ever before – and our forecasting challenges remain.  Is there no hope?

While we’ve acquired a healthy skepticism this past year, we’ve also come to recognize that some things are forecastable to a reasonable degree of accuracy – and some things are not (at least not to the degree of accuracy desired).  For behaviors that are amenable to statistical forecasting methods, we can focus on automatic model building and the overall efficiency of our forecasting process.

Methods like Forecast Value Added analysis allow us to streamline our forecasting efforts by identifying the waste and non-value adding activities in our process.  (The goal is to generate forecasts as accurate as they can reasonably be expected to be, and to do so as efficiently as possible.)  Sales and Operations Planning, with growing adoption and better execution, helps mesh a demand forecast with supply capabilities.  Visibility into demand/supply imbalances, provided through S&OP, lets management act in ways that are most beneficial to the health of the organization – by better (and more profitably) serving its customers.
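To make the FVA idea concrete, here is a minimal sketch of the calculation in Python.  All of the demand and forecast figures are invented, and MAPE is just one possible error metric – treat this as an illustration, not a prescription:

    # Forecast Value Added (FVA): compare each step of the forecasting
    # process against a naive benchmark and against the step before it.
    # All figures below are invented for illustration.

    actuals    = [100, 120,  90, 110, 105]   # observed demand
    naive_fcst = [ 95, 100, 120,  90, 110]   # random walk: prior period's actual
    stat_fcst  = [102, 115,  95, 108, 100]   # statistical model output
    final_fcst = [110, 125,  85, 115,  95]   # after judgmental overrides

    def mape(actual, forecast):
        """Mean absolute percentage error, in percent."""
        return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

    naive_err = mape(actuals, naive_fcst)
    stat_err  = mape(actuals, stat_fcst)
    final_err = mape(actuals, final_fcst)

    # FVA of a step = error of the prior step minus error of this step.
    # Positive means the step added value; negative means it was waste.
    print(f"Statistical model vs. naive: {naive_err - stat_err:+.1f} points of MAPE")
    print(f"Overrides vs. statistical:   {stat_err - final_err:+.1f} points of MAPE")

In this made-up example the judgmental overrides show negative FVA (they made the statistical forecast worse), which is exactly the kind of non-value adding activity FVA analysis is meant to expose.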

For behaviors that are not amenable to statistical forecasting methods, recognition of this “unforecastability” is a key first step.  We wisely do not apply super-human efforts to forecast Heads or Tails in the tossing of a fair coin, because we recognize the randomness and our inability to improve upon a simple guess.  In the business world, when we cannot expect an accurate forecast of customer demand behavior, there are still many things we can do to solve the business problem.  These “alternative approaches” just may not involve forecasting.

Supply chain re-engineering is a well-recognized method for reducing an organization’s reliance on highly accurate forecasts.  When a supply chain is more flexible and responsive, it can react to demand as it occurs.  Postponement strategies, where final configuration or packaging of the finished product is delayed until the demand signal is received, are one way to accomplish this.

Demand smoothing is an approach for reducing volatility in demand patterns.  Organizational policies and practices, like the quarter end “push,” or ill-designed pricing or promotional activities, can create demand behavior that is more erratic – and therefore more difficult to forecast.  The demand smoothing approach looks for ways to encourage more stable, more forecastable, and (likely) more profitable demand from your customers.

Finally, the pruning of extremely low volume items can lead to surprising growth for the products that remain.  All organizations eventually have their dead weight – those aging products near the end of their lifecycle, or newer products that never caught on.  Rather than encumbering the sales force, ordering systems, warehouse pallet spaces, planning systems, and (most costly) management time with such products, it is often best to just get rid of them.  A CPG manufacturer was able to cut the 25% of its product portfolio that accounted for only 0.5% of total sales over the previous 12 months.  An apparel manufacturer found that 50% of its items accounted for only 1% of sales.  A simple Pareto chart (ranking items by volume or revenue) can reveal the opportunity at your organization.
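Here is a minimal sketch of such a Pareto analysis in Python; the item names and revenue figures are entirely hypothetical:

    # Rank items by revenue and flag the long tail that contributes
    # almost nothing to total sales. All figures are hypothetical.

    revenue = {"A": 500_000, "B": 300_000, "C": 150_000, "D": 30_000,
               "E": 12_000, "F": 5_000, "G": 3_000}

    total = sum(revenue.values())
    cumulative = 0
    print(f"{'Item':<6}{'Revenue':>10}{'Cum. share':>12}")
    for item, rev in sorted(revenue.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += rev
        print(f"{item:<6}{rev:>10,}{cumulative / total:>11.1%}")

    # Items past, say, 99% of cumulative revenue (F and G here) are
    # candidates for pruning.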

The growing recognition of the limits of forecasting can lead to benefits in two ways.  For the “forecastable” products, attention can be focused on automation and efficiency of the forecasting process – not on wasting effort pursuing unachievable levels of accuracy.  For “unforecastable” products, alternative approaches can help solve the business problem without reliance on better forecasts.  Just because we are forecasters does not mean we can solve every business problem “if we just had a better model.”  As forecasters, we can help organizational performance by bringing these issues to management and guiding them to the right decisions.

Mike is author of The Business Forecasting Deal, serves on the IBF Advisory Board, and moderates IBF’s annual Demand Management Forum at the spring Best Practices Conference (this year in Dallas, May 4-6, 2011).  His blog The Business Forecasting Deal helps expose the myths, the frauds, and the worst practices in business forecasting, while providing practical solutions to its most vexing problems.

3 Responses to IBF Year End Blog – What We Learned About Forecasting in 2010

  1. Well written blog…

    I have a question on forecastability.

    We often compare forecast accuracy among products, categories, business units, and sometimes even with competitors. As we all know, it is impossible to compare performance fairly when the business dynamics and environment are different.

    In many organisations we are given a target of achieving forecast accuracy to a certain level, such as 80% this year, 85% next year… We all know, by gut feeling, that the target is achievable in some cases but unrealistic in others…

    Is there any way to quantify the maximum achievable forecast accuracy for a given product/category/business unit…? We all know it depends on how the business is run, what processes are in place, what tools you have, how skillful our demand planners are, and how we communicate with our customers… These determine the level of accuracy we could possibly achieve.

    Is it possible to quantify the achievable forecast accuracy target at all by taking into account all the underlying drivers?

    Looking forward to any feedback.

    Regards

  2. Hi Davis,

    Quantifying the maximum achievable forecast accuracy is a difficult problem, and I don’t think there is a perfect answer. We can concoct examples, such as forecasting Heads or Tails in the toss of a fair coin, in which we can determine the limit of forecast accuracy. We can do this because we understand what governs the behavior being forecast – the data generation process (DGP) – and our accuracy is limited only by the randomness in the tossing of the fair coin. In real-life situations we usually don’t know the DGP, the DGP may change over time, and we don’t know the amount of randomness in the behavior.

    There are some good articles on the general topic of forecastability and forecast performance measurement. I discuss these periodically on my blog (http://blogs.sas.com/forecasting) and provide links to some specific articles that may be of use. There is also an article “Setting accuracy targets for short-term judgemental sales forecasting” by Bunn and Taylor in the International Journal of Forecasting 17 (2001) 159-169.

    Since we really don’t know the best accuracy it is possible to achieve, I prefer to set performance targets with respect to the worst we should be able to achieve. Thus, I would set the goal “Forecast no worse than a naive model” (where a naive model is something cheap and easy to implement, such as a random walk or moving average).

    Since we don’t know in advance what accuracy the naive model will achieve, we don’t set a specific numerical target. Instead, over time, we evaluate our forecasting process accuracy vs. the accuracy the naive model achieved. If our process is doing WORSE than the naive model, obviously something is going very wrong!
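    If it helps to make this concrete, here is a minimal sketch of that comparison in Python. The demand history, the process forecasts, and the choice of MAPE as the metric are all hypothetical – a sketch, not a prescription:

        # Benchmark the forecasting process against two naive models:
        # a random walk (forecast = last period's actual) and a
        # 3-period moving average. All figures are hypothetical.

        demand  = [100, 110, 105, 120, 115, 130, 125, 140]   # actuals
        process = [105, 108, 112, 118, 120, 128, 130, 138]   # process forecasts

        def mape(actual, forecast):
            """Mean absolute percentage error, in percent."""
            return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

        # Evaluate every method over the same periods, since the
        # moving average needs three periods of history to start.
        actuals = demand[3:]
        proc_f  = process[3:]
        rw_f    = demand[2:-1]                               # previous actual
        ma_f    = [sum(demand[i-3:i]) / 3 for i in range(3, len(demand))]

        print(f"Process:     {mape(actuals, proc_f):5.1f}% MAPE")
        print(f"Random walk: {mape(actuals, rw_f):5.1f}% MAPE")
        print(f"Moving avg:  {mape(actuals, ma_f):5.1f}% MAPE")
        # If the process MAPE is higher than the naive MAPEs, the
        # process is destroying forecast value.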
