“30% forecast accuracy? Seriously? What do I pay you for?  I could flip a coin and get better results than this!” Yes, we hear this as demand planners. And yes, it hurts – deeply, personally, unjustly.

It’s one of the most frustrating and demoralizing feelings as a Demand Planner: knowing you’re trying your darnedest to improve the accuracy of something that you know is largely unforecastable. You’ve maxed out modeling and model tuning and have resorted to fishing for any judgmental recommendations you can get your hands on. That last step likely only hurts accuracy further, through snowballing bias and the negative forecast value added (FVA) of compounding “expert” overrides. Having thrown everything but the kitchen sink at the problem, you shift your efforts to building up verbal defenses with a Pixar-worthy storyboard of empathy-seeking data challenges and culpability-shifting anecdotes. If only there were a way to prove to management that this product is not actually forecastable…

We Need To Set Management’s Expectations About What Is Forecastable

Management may be aware of standard forecast metrics – many are introduced in MBA programs and in SCOR and operations textbooks. But executives are typically more worried about how quickly they can show improvement in these measurements than about how the measurements are calculated.

Similarly, as demand planners, we are trained and certified on the most common algorithms, performance-measurement computations, and off-the-shelf forecast-modeling data-structure requirements. If we are to set expectations about what is actually forecastable, and what we can actually achieve as demand planners, we need to look beyond these basics. If we don’t, we will forever be taking unfair criticism for things outside our control. We need to present not just our forecast accuracy, but our forecast accuracy alongside forecastability. Forecastability reveals the extent to which an SKU can be forecasted, and provides the crucial context for our forecast accuracy.

Forecast accuracy vs. forecastability

Forecast accuracy depends on how forecastable the product is.

Questions To Ask To Gauge An SKU’s Forecastability

What change in forecast accuracy is realized when the best-fit model is recalculated over different lengths of time-series history?

Are the changes more prevalent for certain model types (hint – they should be for some, especially for more factor-inclusive model types like exponential smoothing)?

What differences in forecast accuracy are observed with monthly, bi-monthly, and quarterly period bucketing? (Is poor forecast accuracy at the monthly level dramatically improved if consumption and forecast accuracy are looked at in quarterly buckets instead?)

Are any SKU-to-SKU, product-line-to-product-line, or product-family-to-product-family correlations observed when regression comparisons are run to look for similar patterns in the demand history? Are any of these patterns accounted for in existing planning bills or bills of materials?

What record counts and financial weighting do the products and model types comprise when categorized into a basic segmentation schema (high-value, volatile; high-value, stable; low-value, volatile; low-value, stable)?

What are the historic forecastability ranges within each segment and product family? (Note: Segmentation can be combined with ABC and Pareto analyses, as well as calculated for markets, customers, or for products within each market/customer.)

Within low-value, volatile records, is the inherent demand variability such that the cost of forecast error is more prohibitive than a simple order policy (e.g., reorder point or make-to-order)?
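As a concrete illustration of the segmentation question above, here is a minimal Python sketch that places an SKU into one of the four value/volatility quadrants using the coefficient of variation (CoV). The value and CoV cutoffs are illustrative assumptions – in practice you would derive them from your own ABC/Pareto analysis.

```python
from statistics import mean, stdev

def cov(history):
    """Coefficient of variation: demand volatility relative to its mean."""
    m = mean(history)
    return stdev(history) / m if m else float("inf")

def segment(history, annual_value, value_cutoff=100_000, cov_cutoff=0.5):
    """Place an SKU into one of the four value/volatility quadrants.

    The cutoffs are illustrative placeholders -- set them from your
    own Pareto / ABC analysis.
    """
    value = "high-value" if annual_value >= value_cutoff else "low-value"
    stability = "volatile" if cov(history) >= cov_cutoff else "stable"
    return f"{value}, {stability}"

# A steady SKU vs. a lumpy one with the same total demand
steady = [100, 105, 95, 102, 98, 100]
lumpy = [0, 300, 0, 0, 280, 20]
print(segment(steady, 250_000))  # high-value, stable
print(segment(lumpy, 250_000))   # high-value, volatile
```

Run per SKU, this gives you the record counts per quadrant the question asks for; weighting each SKU by its annual value then gives the financial picture.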

By asking these questions, we gain insight into forecastability – what can be forecasted accurately, and what cannot. We can then go to the S&OP executive meeting, or sit down with Sales, Marketing, Finance, or senior executives, and say with confidence that for a particular SKU, 30% forecast accuracy is a good thing. We can explain why an SKU cannot be accurately forecasted and make suggestions on that basis – after all, knowing that demand for a product cannot be predicted has serious implications for the business. Knowing this allows us as demand planners to mitigate risk and propose the best course of action.

What’s more, by taking the time to understand what can and cannot be forecasted, and what results can be expected, we can set better expectations and understanding up front.

How To Prove That 30% Forecast Accuracy Is A Good Thing

In the opening example, it is important to prove that 30% is actually a job well done given the SKU’s forecastability (try recalibrating your BI tools to show forecast accuracy in terms of variance relative to the coefficient of variation, or CoV). Go one better by showing where the cost avoidance of an initiative is actually outweighed by the cost of forecasting for it. If you can communicate that the juice isn’t worth the squeeze, you can get work off your plate and focus on what matters.
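As one sketch of what reporting accuracy “in terms of variance to CoV” might look like, the snippet below computes a MAPE-based accuracy figure side by side with the CoV of the actuals, using invented demand numbers. Presented together, a modest accuracy next to a high CoV tells the forecastability story at a glance.

```python
from statistics import mean, stdev

def forecast_accuracy(actuals, forecasts):
    """1 - MAPE, floored at zero (one common accuracy convention)."""
    errs = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a]
    return max(0.0, 1 - mean(errs))

def cov(series):
    """Coefficient of variation of the demand history."""
    return stdev(series) / mean(series)

actuals = [40, 120, 30, 110, 50, 100]   # lumpy demand (invented)
forecasts = [75] * 6                    # a flat best-fit forecast

acc = forecast_accuracy(actuals, forecasts)
volatility = cov(actuals)
print(f"accuracy {acc:.0%} vs. CoV {volatility:.2f}")
# -> accuracy 36% vs. CoV 0.52
```

An accuracy in the mid-30s looks damning on its own; shown against a CoV above 0.5, it reads as roughly what this demand pattern allows.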

Variability can be lessened by extending the time buckets planned for (daily to weekly to monthly to quarterly to semi-annually), but it is more costly: coarser buckets mean less responsive planning and more inventory to buffer demand within the bucket. In certain markets and for certain products, this may be the only option, and one that demand planners should constantly be evaluating and influencing. The flip side is finding ways to improve forecastability itself – forcing the square peg to fit the round hole a little better.
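The effect of extending time buckets is easy to demonstrate: re-aggregate the same monthly history and flat forecast into quarters and recompute accuracy. The numbers below are invented, but they show how month-to-month noise can cancel out inside a coarser bucket.

```python
from statistics import mean

def rebucket(series, size):
    """Aggregate a monthly series into coarser buckets of `size` months."""
    return [sum(series[i:i + size]) for i in range(0, len(series), size)]

def accuracy(actuals, forecasts):
    """1 - MAPE, floored at zero."""
    errs = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a]
    return max(0.0, 1 - mean(errs))

monthly_actual = [20, 90, 40, 80, 10, 60, 30, 100, 20, 70, 50, 30]
monthly_forecast = [50] * 12  # flat forecast

q_actual = rebucket(monthly_actual, 3)
q_forecast = rebucket(monthly_forecast, 3)

print(f"monthly accuracy:   {accuracy(monthly_actual, monthly_forecast):.0%}")
print(f"quarterly accuracy: {accuracy(q_actual, q_forecast):.0%}")
# -> monthly accuracy:   14%
# -> quarterly accuracy: 100%
```

The cancellation here is deliberately extreme, but the direction of the effect is the point: the same forecast judged in quarterly buckets can look dramatically better than in monthly ones.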

For example, track CoV movement over time. Where it is increasing, investigate with Commercial colleagues to identify the causes, then flip the script and challenge what can be done to shape demand back. Analyzing the effects of positively correlated 4Ps (product, price, place, promotion) in more stable products can sometimes yield a playbook to try in your more volatile areas.
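Tracking CoV movement over time can be sketched as a rolling-window calculation. The six-period window and the demand figures below are arbitrary assumptions for illustration:

```python
from statistics import mean, stdev

def rolling_cov(history, window=6):
    """CoV over a trailing window -- a simple way to spot volatility drifting."""
    out = []
    for i in range(window, len(history) + 1):
        w = history[i - window:i]
        out.append(stdev(w) / mean(w))
    return out

# Demand that starts steady, then turns lumpy
history = [100, 98, 103, 101, 99, 100, 40, 160, 30, 170, 20, 180]
for i, c in enumerate(rolling_cov(history)):
    print(f"window ending period {i + 6}: CoV {c:.2f}")
```

A rising trace like this one is the trigger for the conversation with Commercial: what changed, and can it be shaped back?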

Parting Thoughts On Forecastability 

This friendly neighborhood forecaster’s closing reminder is this: you measure how something is set up to execute, and you measure to control or to improve. But if the setup is wrong for the metric, or the metric is wrong for the setup, then you’re allowing the box you’re in to dictate your success. Break out of the box and see whether you need to redesign the box or redesign the metric. In forecasting, one size does not fit all – don’t let spinning on the wheel stop you from asking “why?” more often. That is how we understand what is actually forecastable and what isn’t, how we get the credit we deserve, and how we push the discipline forward.

Stay inquisitive, my friends. That’s the mark of the best Demand Planning professional.