Forecast accuracy seems to be shrouded in mystery, especially for Executive management. Does the Executive team have a preconceived idea of what is acceptable? There is certainly a lot of pressure to deliver. Are their expectations realistic? If you are currently supplying forecast accuracy measurements, does the Executive team truly understand those metrics?
One of the biggest disservices a company can do to those responsible for the forecast is to force a preconceived accuracy level on them. Forecasters should take the lead in removing the mystery and in managing the expectations and preconceptions of Executive management.
Executives typically review reports at a higher level and tend not to have the time to go over pages of detail. For example, they are very much at home with the daily sales report, and they tend to associate forecast accuracy with the sales-to-budget percentage they see daily. That is why, when you ask them what they think forecast accuracy should be, they typically respond with something like “95%”. But that is just total quantity forecast compared to total quantity sold; what I call the “one number forecast to one number sales”. If you forecast 10 ties and 90 shirts and you sell 95 ties, management may think you have 95% accuracy. But is that an accurate view of forecasting performance? Reporting a “one number” metric will not provide a realistic view of the state of the forecasts.
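The tie-and-shirt example can be sketched in a few lines. This is a hypothetical illustration: the aggregate calculation mirrors the “one number” view described above, and weighted MAPE is one common item-level error measure, used here for contrast rather than as the author's prescribed metric.

```python
# Hypothetical sketch of the tie/shirt example from the text.
forecast = {"ties": 10, "shirts": 90}
actual = {"ties": 95, "shirts": 0}

# The "one number" view: total actual sales vs. total forecast.
one_number_accuracy = sum(actual.values()) / sum(forecast.values()) * 100
print(f"One-number accuracy: {one_number_accuracy:.0f}%")   # 95%

# An item-level view: weighted MAPE (sum of absolute item-level errors
# divided by total actual sales) -- one common error measure.
abs_errors = {k: abs(forecast[k] - actual[k]) for k in forecast}
wmape = sum(abs_errors.values()) / sum(actual.values()) * 100
print(f"Item-level error (WMAPE): {wmape:.0f}%")            # ~184%
```

The same data that looks like 95% accuracy at the aggregate level shows an error well above 100% once the item mix is taken into account, which is exactly why the “one number” metric misleads.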
Multiple metrics should be employed. Some metrics measure accuracy while others measure error; some reveal tendency and others reveal ranges. The metrics chosen should tell the story of your forecast accuracy. Unless you’re reporting to the VP of Forecasting, who would likely want to see every metric under the sun, keep it simple enough for the Executive team to understand, but provide enough information to show the good, the bad, and the ugly. I reference the term “diagnostics” a lot when discussing forecast accuracy. You need to run a battery of tests to know what’s going on under the hood: Is there a tendency to over-forecast? Is one input to the forecast causing most of the heartburn? Where are the largest variances?
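A minimal sketch of such a diagnostic battery might look like the following. The SKU names and numbers are made up for illustration; bias (mean percentage error) and MAPE are standard textbook formulas, not necessarily the exact metrics the author uses.

```python
# Hypothetical SKU-level (forecast, actual) pairs -- made-up numbers.
data = {
    "SKU-1": (100, 120),
    "SKU-2": (85, 60),
    "SKU-3": (50, 55),
}

# Tendency: mean percentage error (positive => over-forecasting).
bias = sum((f - a) / a for f, a in data.values()) / len(data) * 100

# Overall error size: mean absolute percentage error (MAPE).
mape = sum(abs(f - a) / a for f, a in data.values()) / len(data) * 100

# Largest variances: rank items by absolute unit error.
worst = sorted(data, key=lambda k: abs(data[k][0] - data[k][1]), reverse=True)

print(f"Bias: {bias:+.1f}%  MAPE: {mape:.1f}%  Largest variance: {worst[0]}")
```

Each line answers one of the diagnostic questions above: the sign of the bias reveals a tendency to over- or under-forecast, the MAPE gives the overall error size, and the ranked list points to where the largest variances sit.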
There is also a tendency to compare your forecast accuracy or error to “benchmarks” and “standards”, or to other companies. A multitude of articles, blogs, reports, and posts have been written about forecast accuracy and error benchmarks, and about why it is not appropriate to compare one company to another. Forecast accuracy is affected by many factors, and every company is different. While it’s useful to know what other companies may be achieving, expecting your company to reach an arbitrary goal is unrealistic and can be detrimental to your process. Moreover, while there may be specific formulas for each metric, there are no “standards” regarding what data should be used to produce them (orders data, shipment data, etc.).
Your Executive team needs to understand that accuracy measurements should be used as a tool to increase forecast accuracy. As the saying goes, “What can be measured can be improved”. Metrics should not be used to point fingers at others, nor to compare against benchmarks or against other companies or divisions within the same company. “Continuous improvement” in forecast accuracy results from measuring on an ongoing basis. Changing the mindset and perceptions of others is always a challenge. Hopefully, this blog will be of some help if you ever encounter these situations.
Michael Morris, CPF
Manager, Inventory – Product Service and Parts
Yamaha Corporation of America
Hear Michael Morris speak more about managing executive expectations for forecasting at IBF’s upcoming Supply Chain Forecasting & Planning Conference in Scottsdale, Arizona, USA, February 23-25, 2014.