Naive Forecast

Short answer: No.

I have long argued against arbitrary forecasting performance objectives, suggesting instead that the goal should be “Do no worse than a naive forecast.” We don’t know in advance how well a naive forecast will perform, so we can’t set a numerical performance goal in advance. But we can track performance over time and determine whether we are meeting this (woefully pathetic-sounding) objective.

Can You Actually Beat A Naive Forecast?

A recent LinkedIn discussion on this topic wove through 20 separate comments, and among them was an exchange between Sam Smale and Richard Herrin that I’d like to address. Sam pointed out, quite correctly, that the goal of being “better than a naive model” could make life too easy. It is true that just about any decent statistical forecasting model should forecast better than a random walk (the standard naive forecasting model). However, given the biases, politics, and personal agendas that plague most organizational forecasting processes, I believe that beating a random walk is still a legitimate “first test.”

(Note: If your forecasting process is doing worse than a naive forecast, it is probably pretty bad!)

Richard then turned the conversation toward slightly more sophisticated models, perhaps ones including trend or seasonality. Is it appropriate to consider these as “naive” models against which to do comparisons?

There is a very important role for these “slightly sophisticated” models that Richard brought up. They fall between the random walk and the more sophisticated models we typically use for business forecasting. Let’s call these “default” models.

A default model is something that is simple to compute and that you could actually use to run your business. This is the important distinction: a default model is simple to compute (as a random walk is), but you would never use a random walk to run your business, because of the instability of its future forecasts.

The Problem With Naive Forecasts

Recall that with a random walk, the most recent observation becomes your forecast for all future periods. If you sold 100 last month, your forecast for all future months is 100. If you sell 500 this month, the forecast for all future months changes to 500. If you sell 10 next month, the forecast for all future months changes to 10. This is problematic! You would not want to whipsaw your supply chain with such radical changes to the forecast in every future period.
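To make that whipsaw concrete, here is a minimal sketch of a random walk forecast in Python. The sales figures are hypothetical, chosen only to mirror the example above.

```python
# A naive (random walk) forecast: the most recent observation becomes the
# forecast for every future period. The sales numbers are illustrative.

def naive_forecast(history, horizon):
    """Return the random-walk forecast: repeat the last observation."""
    return [history[-1]] * horizon

sales = [100, 500, 10]  # three consecutive months of hypothetical sales

for month in range(1, len(sales) + 1):
    observed = sales[:month]
    print(f"After month {month} (sold {observed[-1]}): "
          f"next 3 months forecast = {naive_forecast(observed, 3)}")

# After month 1 (sold 100): next 3 months forecast = [100, 100, 100]
# After month 2 (sold 500): next 3 months forecast = [500, 500, 500]
# After month 3 (sold 10): next 3 months forecast = [10, 10, 10]
```

Every new observation replaces the entire forecast, which is exactly the instability that makes the random walk unusable for running a business.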

To conclude, even if your forecasting process does worse than a random walk (the “first test” of process performance), you would never want to start using the random walk as your forecast. A more useful “second test” is to compare your performance against a slightly sophisticated “default” forecast (e.g., a moving average or single exponential smoothing). It would still be a reasonable (if slightly more challenging) goal to beat the default forecast. And if your process were failing to do so, you could simply ignore the process and use the default instead.
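As a rough illustration of this “second test,” here is a sketch that scores one-step-ahead forecasts from the naive model and two simple default models on the same history, using mean absolute error. The demand series, moving-average window, and smoothing constant are all illustrative assumptions, not anything from the LinkedIn discussion.

```python
# Compare one-step-ahead accuracy of the naive model against two "default"
# models (moving average and single exponential smoothing) via MAE.

def one_step_mae(history, forecaster):
    """Mean absolute error of one-step-ahead forecasts over the history."""
    errors = [abs(forecaster(history[:t]) - history[t])
              for t in range(1, len(history))]
    return sum(errors) / len(errors)

def naive(past):                      # random walk: repeat the last observation
    return past[-1]

def moving_average(past, window=3):   # average of the most recent `window` periods
    recent = past[-window:]
    return sum(recent) / len(recent)

def exp_smoothing(past, alpha=0.3):   # single exponential smoothing
    level = past[0]
    for y in past[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]  # hypothetical

for name, model in [("naive", naive),
                    ("moving average", moving_average),
                    ("exp smoothing", exp_smoothing)]:
    print(f"{name}: one-step MAE = {one_step_mae(demand, model):.1f}")
```

Your actual process forecast would be scored the same way; if it can’t beat the default models on this kind of comparison, switching to the default is a legitimate option.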