3 Sources Of Forecast Error To Avoid


Those seeking to reduce error can look in three places to find trouble: the data that go into a forecasting model, the choice of forecasting method, and the organization of the forecasting process. Let’s look at each of these elements to understand where error can be introduced into forecasting so we can mitigate it and improve forecast accuracy.

1. Error Caused by Data Problems

Wrong data produce wrong forecasts. I have seen an instance in which computer records of product demand were wrong by a factor of two! Those involved spotted that problem eventually, but a less obvious – though still damaging – error can easily slip through the cracks and poison the forecasting process. In fact, just organizing, acquiring, and checking data is often the largest source of delay in the implementation of forecasting software. Many data problems derive from the data having been neglected until a forecasting project made them important.

Data Anomalies

Even with perfectly curated forecasting databases, there can be wildly discrepant – though accurately recorded – data, i.e., anomalies. In a set of, say, 10,000 products, some items are likely to have endured strange things in their demand histories. Depending on when the anomalies occur and what forecasting methods are in use, anomalies can drive forecasts seriously off track if not dealt with.
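
As a rough illustration, here is a minimal Python sketch of one common way to screen for such anomalies: flagging observations that sit many robust standard deviations from the median. The demand history and the threshold are hypothetical, and commercial forecasting software applies more sophisticated screens.

```python
import numpy as np

def flag_anomalies(demand, threshold=4.0):
    """Flag demand observations more than `threshold` robust z-scores from the
    median, using the median absolute deviation (MAD). Threshold is illustrative."""
    demand = np.asarray(demand, dtype=float)
    median = np.median(demand)
    mad = np.median(np.abs(demand - median))
    if mad == 0:                                   # flat history: nothing to flag
        return np.zeros(len(demand), dtype=bool)
    robust_z = 0.6745 * (demand - median) / mad    # scaled to ~N(0,1) for normal data
    return np.abs(robust_z) > threshold

history = [12, 15, 11, 14, 13, 210, 12, 16]        # hypothetical: one wild month
print(flag_anomalies(history))                     # the 210 is flagged for review
```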

2. Error Caused by the Wrong Forecasting Method

Traditional forecasting techniques are called extrapolative methods because they try to find patterns in an item’s demand history and project (extrapolate) that same pattern into the future. The most commonly used extrapolative methods are exponential smoothing and moving averages. There are variants of each type, intended to match the key characteristics of an item’s demand history. Is demand basically flat? Is there a trend? Is there a seasonal cycle?
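
For readers unfamiliar with these methods, here is a minimal Python sketch of simple exponential smoothing and a moving average. The smoothing constant and window length are purely illustrative; forecasting software normally selects such parameters automatically.

```python
import numpy as np

def simple_exponential_smoothing(demand, alpha=0.2):
    """One-step-ahead forecast: the level is a weighted blend of the latest
    observation and the previous level (alpha chosen illustratively)."""
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level
    return level                                   # forecast for the next period

def moving_average(demand, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    return float(np.mean(demand[-window:]))

history = [100, 103, 98, 105, 102, 107]            # hypothetical demand history
print(simple_exponential_smoothing(history))       # roughly 102
print(moving_average(history))                     # mean of last 3 = 104.67
```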

However, where there is choice, there is the possibility of error. Choosing an extrapolative method that misses trend or seasonality is sure to create avoidable forecast error; so is choosing one that wrongly assumes a trend or seasonal pattern that isn’t there.

“Using classical extrapolative methods on intermittent data is asking for trouble.”

Further, extrapolative methods are designed to work with data that are “regular,” which is to say non-intermittent. Intermittent data have a large percentage of zero demands, with random non-zero demands mixed in. Spare parts and big-ticket, slow-moving items are usually intermittent.

High-volume items like CPG products are usually non-intermittent. Intermittent demand data require specialized forecasting methods, such as those based on Markov modeling and statistical bootstrapping. Using classical extrapolative methods on intermittent data is asking for trouble.
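
To make the bootstrapping idea concrete, here is a deliberately simplified sketch: resample the observed per-period demands (zeros included) and sum them over a replenishment lead time to approximate the distribution of total lead-time demand. The spare-part history, lead time, and sample count are made up, and real implementations (including the Markov-modeling variants mentioned above) are considerably more elaborate.

```python
import numpy as np

def bootstrap_lead_time_demand(demand_history, lead_time=3, n_samples=10_000, seed=0):
    """Simplified statistical bootstrap: resample observed per-period demands
    (zeros included) and sum them over the lead time to build a distribution
    of total lead-time demand for an intermittent item."""
    rng = np.random.default_rng(seed)
    history = np.asarray(demand_history)
    samples = rng.choice(history, size=(n_samples, lead_time), replace=True).sum(axis=1)
    return samples

# Hypothetical spare-part history: mostly zeros with occasional non-zero demands.
history = [0, 0, 3, 0, 0, 0, 1, 0, 0, 5, 0, 0]
samples = bootstrap_lead_time_demand(history)
print(samples.mean())                              # expected lead-time demand
print(np.percentile(samples, 95))                  # a service-level style quantile
```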

Even when the assumptions underlying a forecasting method are satisfied by an item’s demand history, the method might still be considered “wrong” if there is a better method available. In some cases, methods based on regression analysis (also called causal modeling) can outperform extrapolative methods or specialized methods for intermittent demand. This is because regression models leverage data other than an item’s demand history to forecast future demand.

“Although regression models have great potential, they also require greater skill, more data, and more work.”

Although regression models have great potential, they also require greater skill, more data, and more work. Unlike extrapolative and intermittent methods, they are not available in software as automatic procedures. The first problem is to determine what outside factors drive demand. Then one must acquire historical data on those factors to use as predictor variables in a regression equation. Then one must separately predict all those predictors. This process demands a level of statistical sophistication that is usually lacking among Demand Planners, opening up the possibility of error.
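
The sketch below illustrates the causal-modeling idea under invented assumptions: demand is taken to be driven by a promotion flag and a price, and an ordinary least squares fit produces the forecast. Note that the forecast still depends on predicting the predictors; here we simply assume a planned promotion at a planned price.

```python
import numpy as np

# Hypothetical history: demand driven by a promotion flag and average price.
promo  = np.array([0, 1, 0, 0, 1, 0, 1, 0])
price  = np.array([9.9, 8.5, 9.9, 10.5, 8.5, 9.9, 8.0, 10.5])
demand = np.array([120, 180, 115, 100, 185, 118, 200, 98])

# Fit demand = b0 + b1*promo + b2*price by ordinary least squares.
X = np.column_stack([np.ones_like(price), promo, price])
coeffs, *_ = np.linalg.lstsq(X, demand, rcond=None)

# To forecast, the predictors themselves must first be predicted; here we
# simply assume next period has a promotion at a planned price of 8.5.
next_x = np.array([1.0, 1.0, 8.5])
print(next_x @ coeffs)                             # regression forecast for next period
```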

Pro tip: Any proposed statistical forecasting method should be benchmarked against the simplest method of all, known as the naïve forecast. If the data are non-seasonal, then the naïve forecast boils down to “tomorrow’s demand will be the same as today’s demand.” If the data are seasonal, it might be something like “next April’s demand will be the same as this April’s demand.” If a fancy method can’t do better than the naïve method (and sometimes they can’t), then why use it?
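
One simple way to run such a benchmark, sketched here with made-up data, is to compare the one-step-ahead errors of the naïve forecast against those of a candidate method, using only the data that would have been available at each point in time.

```python
import numpy as np

def one_step_mae(demand, forecaster):
    """Mean absolute error of one-step-ahead forecasts made with only past data."""
    errors = []
    for t in range(1, len(demand)):
        forecast = forecaster(demand[:t])
        errors.append(abs(demand[t] - forecast))
    return float(np.mean(errors))

naive = lambda history: history[-1]                        # "tomorrow = today"
moving_avg3 = lambda history: float(np.mean(history[-3:])) # candidate method

history = [100, 103, 98, 105, 102, 107, 101, 104]          # hypothetical demand
print("naive MAE:", one_step_mae(history, naive))
print("3-period MA MAE:", one_step_mae(history, moving_avg3))
# Keep the fancier method only if it actually beats the naïve benchmark.
```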

3. Error Caused by Flaws in the Forecasting Process

Forecasting always starts out as an individual sport but usually includes a team component. Each phase can go wrong. We’ve already discussed errors caused by individual forecasters, such as deciding to use the wrong model or feeding the model data of poor quality.

“Forecasting always starts out as an individual sport but usually includes a team component.”

The team component usually plays out in periodic Sales and Operations Planning (S&OP) meetings. In these gatherings, the relevant departments come together to argue out what the company’s official forecast will be. While the aim is to achieve consensus, the result may work against the goal of reducing forecast error.

Participants often come to these meetings with their own competing forecasts. The first mistake may be trying to pick just one as the “official” forecast for all. Various functions – Marketing, Sales, Production, Finance – often have different priorities and different planning tempos. For instance, Finance may need quarterly forecasts, but Production might need weekly forecasts.

These differences in forecast horizon imply different levels of aggregation, which can greatly influence the choice of a forecasting method. For example, day-of-week seasonality in demand may be critical for Production but irrelevant for Finance.
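
As a small illustration of how aggregation hides or reveals patterns, the sketch below builds a hypothetical daily demand series with a day-of-week effect and rolls it up to weekly and quarterly buckets: the pattern is obvious at the daily level and invisible at the quarterly level.

```python
import numpy as np
import pandas as pd

# Two years of hypothetical daily demand with a day-of-week pattern
# (weekend demand roughly half of weekday demand).
days = pd.date_range("2023-01-01", periods=730, freq="D")
rng = np.random.default_rng(0)
base = np.where(days.dayofweek < 5, 100, 50)
daily = pd.Series(base + rng.normal(0, 5, len(days)), index=days)

weekly = daily.resample("W").sum()       # Production-level view: weekly buckets
quarterly = daily.resample("QE").sum()   # Finance-level view ("QE" = quarter end in
                                         # recent pandas; use "Q" on older versions)

print(daily.groupby(daily.index.dayofweek).mean())  # day-of-week effect is visible
print(quarterly)                                    # the same effect is invisible here
```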

Assuming there are competing forecasts at the same time scale, the second mistake may lie in how those forecasts are evaluated. Relative accuracy should be the deciding criterion, and the mistake is failing to treat relative accuracy as an empirical question: one settled by measurement, not by arguments about relative expertise or sophistication.

Too often, companies do not take the time to acquire and analyze retrospective assessments of forecast accuracy. If the task is to forecast next month’s demand using a certain technical approach, how has that approach been doing? Forecasting software often includes the means to do this analysis, but it is not always exploited when available. If it is not available, it should be made so.
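
The analysis need not be elaborate. Given a log of past forecasts and the actuals that followed, a few summary statistics already say a lot, as in this sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical log of the last 12 months: what was forecast vs. what actually happened.
forecasts = np.array([110, 120, 95, 130, 105, 118, 122, 99, 140, 115, 108, 125])
actuals   = np.array([102, 131, 90, 118, 111, 120, 117, 104, 128, 119, 101, 133])

errors = actuals - forecasts
mae  = np.mean(np.abs(errors))                     # mean absolute error
bias = np.mean(errors)                             # persistent over/under forecasting
mape = np.mean(np.abs(errors) / actuals) * 100     # percentage error (non-zero actuals)

print(f"MAE: {mae:.1f}  Bias: {bias:+.1f}  MAPE: {mape:.1f}%")
```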

“S&OP meetings often work, or fail, when the participants suggest changes to statistical forecasts.”

S&OP meetings often work, or fail, when the participants suggest changes to statistical forecasts. Since statistical forecasts are inherently backward-looking, these management overrides should, in principle, reduce error by accounting for factors like future promotions or market conditions that are not encoded in an item’s demand history. The third mistake is failing to monitor and confirm their value. Many of us believe we have a “golden gut” and can adjust forecasts without risk. Not necessarily true; trust but verify.
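
“Verify” can be as simple as recording both the statistical forecast and the overridden forecast, then comparing their errors once actuals arrive: a forecast-value-added style check, sketched below with hypothetical numbers.

```python
import numpy as np

# Hypothetical records: statistical forecast, the overridden ("consensus") forecast,
# and the actual demand for the same periods.
statistical = np.array([100, 95, 110, 105, 120, 98])
overridden  = np.array([115, 90, 125, 100, 140, 110])
actual      = np.array([103, 97, 112, 102, 118, 99])

mae_stat = np.mean(np.abs(actual - statistical))
mae_over = np.mean(np.abs(actual - overridden))

# Positive "value added" means the overrides reduced error; negative means
# the golden gut made things worse.
print(f"statistical MAE: {mae_stat:.1f}, overridden MAE: {mae_over:.1f}, "
      f"value added: {mae_stat - mae_over:+.1f}")
```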