How valuable is forecast accuracy benchmarking? It’s always interesting to see how your competitors are faring, but does knowing other companies’ forecast accuracy help improve your own, and does it help you set realistic forecast accuracy targets for your demand planners?

I think there are some lessons we can learn from Montessori here. My 10-year-old daughter has attended a Montessori school for over 5 years with amazing results. As a parent, I have always loved their approach, and we have seen the philosophy in action. My daughter has blossomed over the last few years.

“A student’s progress should be measured in terms of the questions they are asking, not merely by the answers that they are reciting” (Robert John Meehan).

If you are not familiar with the Montessori method, its foundation is self-directed learning. Students are free to choose the activities they work on from a range of carefully prepared, developmentally appropriate materials. One thing other parents find surprising about Montessori is that it does not give “tests.” The test-less approach is not about creating a careless environment, but one where each child is recognized as different, and where self-motivation and mastery at their own level are the focus.

The same approach can be applied to demand planning, and it is destroyed the moment you start using forecast accuracy benchmarks.

The (Severe) Limitations Of Forecast Accuracy Benchmarking

With this in mind, I was sitting on the couch, listening to my daughter talk about the wonderful day she had and how much she loves school, when I opened an email. The sender was asking me for forecast accuracy benchmarks. I get this question a lot, and my answer is always the same:

The best benchmark is no benchmark. Stop trying to benchmark forecast accuracy against an industry average!

Far too often I see annual goals (and even bonuses) tied to an arbitrary number that someone at another company is achieving. We treat forecast error and demand uncertainty as a monthly pass-or-fail test, measured against what everyone else is doing.

The obvious truth is that, even within the same industry, the items or item combinations are different, the forecasting time horizon may vary, market share can affect volume and variation, and a host of other factors, from systems and operational limitations to data, lead to different levels of forecast error.

Using forecast accuracy benchmarks to set your own targets is like comparing an apple to a grapefruit.

The dirty little secret is that items are different, companies are different, and demand uncertainty should be expected to be different.

Use Forecastability Instead To Set Your Accuracy Targets

Many times, the companies at the top of the benchmark list are there not because they’re the best at forecasting but because they have the easiest demand to forecast. They could be forecasting at lag zero with only 12 items. We need to look at what the individual planner is trying to forecast and judge the forecastability of each item on its own merits. (Learn how to gauge forecastability here.)
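
The article linked above covers gauging forecastability in depth; as a rough illustration, here is a minimal sketch of one common proxy, the coefficient of variation (CoV) of historical demand. The example items and numbers are illustrative assumptions, not data from any benchmark.

```python
import numpy as np

def coefficient_of_variation(demand: np.ndarray) -> float:
    """One common forecastability proxy: std / mean of historical demand.

    Lower CoV generally means smoother, easier-to-forecast demand;
    higher CoV means lumpier, harder-to-forecast demand.
    """
    mean = demand.mean()
    if mean == 0:
        return float("inf")  # no demand signal to measure against
    return demand.std(ddof=0) / mean

# Two items that could sit side by side in the same industry benchmark:
smooth = np.array([100, 104, 98, 102, 101, 99, 103, 100])  # stable demand
lumpy = np.array([0, 250, 10, 0, 180, 5, 0, 300])          # intermittent demand

print(f"smooth item CoV: {coefficient_of_variation(smooth):.2f}")  # ~0.02
print(f"lumpy item CoV:  {coefficient_of_variation(lumpy):.2f}")   # ~1.29
```

No single accuracy target makes sense for both of these items, which is exactly why an average across companies tells you so little.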

The typical approach is to look at averages, so 30% WMAPE is good, right? If this is the attitude in your planning team, your demand planners will realize they can do a lot less work for the same results and will never reach their full potential. The forecastability of your items could be above average, but this benchmarking mindset won’t allow you to improve.
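
For reference, WMAPE is simply the total absolute error divided by the total actual volume. A minimal sketch, with illustrative numbers:

```python
import numpy as np

def wmape(actuals: np.ndarray, forecasts: np.ndarray) -> float:
    """Weighted MAPE: total absolute error divided by total actual volume.

    Weighting by volume avoids plain MAPE's habit of exploding on
    low-volume periods.
    """
    return np.abs(actuals - forecasts).sum() / np.abs(actuals).sum()

actuals = np.array([120, 80, 100, 150, 90, 110])
forecasts = np.array([100, 95, 105, 120, 100, 115])
print(f"WMAPE: {wmape(actuals, forecasts):.1%}")  # 13.1%
```

Whether 13% (or 30%) is good or bad depends entirely on the forecastability of the items behind the number.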

Using forecast accuracy benchmarks sets demand planners up for failure, or stops them from trying to improve.

What if you have much more difficult items to forecast, and forecastability is lower than average? Then the benchmark gives the demand planner unrealistic goals and sets them up for failure.

This is about understanding how each product line, item, or customer is different, and putting self-motivation and mastery at the center of your approach. A better way is to benchmark the underlying forecastability of the demand patterns and measure improvements against their own baselines. To do this, you can focus on forecast value added (FVA%) measured against a naïve model, or on a demand variation index (DVI) of the same data.
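
As a rough sketch of the FVA idea, here is FVA measured in percentage points against a simple last-period naïve model; the data is illustrative, and DVI is omitted because its definition varies by vendor.

```python
import numpy as np

def wmape(actuals: np.ndarray, forecasts: np.ndarray) -> float:
    # Volume-weighted MAPE, as in the sketch above.
    return np.abs(actuals - forecasts).sum() / np.abs(actuals).sum()

def fva_points(actuals: np.ndarray, process_forecasts: np.ndarray) -> float:
    """FVA in percentage points: naive WMAPE minus process WMAPE.

    Positive FVA -> the planning process beat the do-nothing baseline;
    negative FVA -> the process made the forecast worse.
    """
    naive = actuals[:-1]  # last period's actual, used as this period's forecast
    naive_err = wmape(actuals[1:], naive)
    process_err = wmape(actuals[1:], process_forecasts[1:])
    return (naive_err - process_err) * 100

actuals = np.array([120, 80, 100, 150, 90, 110])
process = np.array([100, 95, 105, 120, 100, 115])
print(f"FVA: {fva_points(actuals, process):+.1f} points")  # +23.6: beats naive
```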

FVA Allows You To Improve Forecast Accuracy

The question shouldn’t be whether you pass or fail, but whether the steps you take improve the results, and by how much. Measuring FVA leads to better management of the forecast process, because FVA adds visibility into the inputs and a clearer understanding of the sources that contributed to the forecast, so you can manage their impact properly.

Companies can use this analysis to determine which forecasting models, inputs, or activities are adding value and which are actually making the forecast worse. You can also use FVA to set targets, to understand what accuracy would be if you did nothing, and to see what it could be with a better process.
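
One way to make this concrete is a “stairstep” report that compares the WMAPE of each step in the process to the steps before it. Here is a minimal sketch with synthetic data; the step names and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
actuals = rng.normal(100, 20, size=24).clip(min=0)

# Hypothetical forecasts at each stage of the process; in practice
# these come from your planning system's audit trail.
steps = {
    "naive (do nothing)": np.roll(actuals, 1),                  # last-period actual
    "statistical model": actuals + rng.normal(0, 12, size=24),  # model output
    "planner override": actuals + rng.normal(0, 18, size=24),   # after manual touches
}

def wmape(a, f):
    return np.abs(a - f).sum() / np.abs(a).sum()

for name, fc in steps.items():
    print(f"{name:20s} WMAPE: {wmape(actuals[1:], fc[1:]):6.1%}")
```

Any step whose WMAPE is higher than the step before it is subtracting value and is a candidate for rework or removal.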

Most of all, FVA encourages mastery and grades you on what you can do, or have done, instead of what some unknown company with unknown forecastability has done. Benchmarking against industry averages can only tell us what accuracy the so-called best-in-class companies achieve; it does nothing to test your individual situation and what you are capable of doing.

Maria Montessori’s approach is powerful and universal. She would have us stop constantly testing ourselves against an arbitrary average and instead focus on our own individual forecasting process, reaching mastery along the way.