
Beginning this December, the Institute of Business Forecasting is proud to publish a new blog series of case studies on Forecast Value Added (FVA). Michael Gilliland, Product Marketing Manager for SAS, will lead the series, interviewing forecasters who have applied FVA analysis within their organizations and reporting on their findings. Many of the forecasters interviewed have spoken publicly about FVA at IBF conferences or elsewhere, although some will be sharing their stories for the first time here on this blog.

The inaugural FVA interview will be with Jonathon Karelse, co-founder of NorthFind Partners. Jonathon first used FVA analysis while implementing a demand-driven global planning process at Yokohama Tire Canada. Through his consulting work with NorthFind Partners, he now brings this innovative use of FVA, along with other tools such as the Comet Chart and the Forecastability Matrix, to a broad range of clients in global manufacturing and distribution.

If you have questions about FVA, or you have experience implementing it and would be interested in sharing your story, please contact Michael Gilliland directly at mike.gilliland@sas.com.

A Sneak Peek…

___________________________________________________________________________

What is Forecast Value Added?

Forecast Value Added (FVA) is defined as:

The change in a forecasting performance metric that can be attributed to a particular step or participant in the forecasting process.

For example, suppose your forecasting software has achieved a MAPE of 40%, and after management adjustments the MAPE is reduced to 38%. We would say that the management adjustments have “added value” since they reduced forecast error by two percentage points.
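To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The actuals and forecasts are made up for illustration:

```python
# Minimal FVA calculation (all numbers are hypothetical).
actuals           = [100, 120, 90, 110]
software_forecast = [140, 80, 120, 150]   # statistical/software forecast
adjusted_forecast = [135, 85, 115, 145]   # after management adjustments

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

software_mape = mape(actuals, software_forecast)
adjusted_mape = mape(actuals, adjusted_forecast)

# FVA of the adjustments = the change in the performance metric they caused.
# A positive value means the adjustments reduced error, i.e., added value.
print(f"Software MAPE: {software_mape:.1f}%")
print(f"Adjusted MAPE: {adjusted_mape:.1f}%")
print(f"FVA of adjustments: {software_mape - adjusted_mape:+.1f} percentage points")
```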

The purpose of FVA analysis is to determine which forecasting process activities are failing to improve the forecast, or are making it worse. Consistent with a “lean” approach, the objective is to identify and eliminate non-value adding activities from the forecasting process, freeing resources to be utilized for more productive activities.

Forecasting can be a highly politicized process, with each participant bringing their own biases and personal agendas. It is quite common for companies to find that their elaborate forecasting processes, full of touch points for human intervention, simply make the forecast worse. We’ll see plenty of examples of such waste throughout the blog series.

How to Conduct FVA Analysis

FVA analysis begins by mapping each sequential step in the forecasting process, and then tracking the results at each step. A common process includes these steps:

Software Forecast → Analyst Override → Consensus Override → Executive Override

Although some systems keep only the latest adjusted forecast, a thorough FVA analysis requires that data be kept for each step in the forecasting process. You might find that the first two steps achieved MAPEs of 40% and 38% respectively, but the Consensus Override was 39%, and the Executive Override 42%. Such results suggest that the final two steps are just making the forecast worse. But by keeping only the latest adjusted forecast (without tracking the results at each process step), you would never know.
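Once forecasts are retained at every step, producing a "stairstep" FVA report is simple bookkeeping. A sketch, using the hypothetical MAPE values from the paragraph above:

```python
# Stairstep FVA report: each step's MAPE compared to the prior step.
# Step names and MAPE values match the hypothetical example in the text.
steps = [
    ("Software Forecast",  40.0),
    ("Analyst Override",   38.0),
    ("Consensus Override", 39.0),
    ("Executive Override", 42.0),
]

print(f"{'Process step':<20}{'MAPE':>7}{'FVA vs prior':>15}")
prior = None
for name, step_mape in steps:
    fva = "" if prior is None else f"{prior - step_mape:+.1f} pts"
    print(f"{name:<20}{step_mape:>6.1f}%{fva:>15}")
    prior = step_mape
```

A negative FVA at the Consensus and Executive steps flags exactly the waste described above: those touch points consume effort while degrading the forecast.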

The Naive Forecast

FVA analysis also requires comparison of process performance to a “naive” forecast. Per the IBF Glossary, a naive forecast is something simple to compute, requiring the minimum of resources. Traditional examples are the random walk (aka “no change” model, where your last observed value becomes your forecast), and the seasonal random walk (e.g., use the actual from the same period a year ago as the forecast for this year).

Utilizing a naive forecast requires virtually no effort or cost, so a reasonable expectation is that your forecasting process (which probably does require considerable effort at significant cost) should produce better forecasts. But until you conduct FVA analysis, you don't know whether it does.
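As a sketch of what that comparison looks like in practice, the snippet below builds both naive models over a short hypothetical history and computes FVA against the random walk. The history, holdout, and process forecast are all made up:

```python
# Comparing the process forecast to naive benchmarks (illustrative data).
# FVA vs. naive = naive MAPE minus process MAPE; positive means the
# process beats "doing nothing."

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# 18 months of hypothetical history; evaluate over the last 6 months.
history = [100, 110, 95, 120, 130, 105, 115, 125, 90, 140, 135, 100,
           105, 112, 98, 118, 128, 110]
actuals = history[-6:]

random_walk = history[-7:-1]    # each forecast = the prior month's actual
seasonal_rw = history[-18:-12]  # each forecast = the actual 12 months ago
process_fc  = [108, 115, 96, 120, 125, 112]  # final process forecast (made up)

for name, fc in [("Random walk", random_walk),
                 ("Seasonal RW", seasonal_rw),
                 ("Process", process_fc)]:
    print(f"{name:<12} MAPE: {mape(actuals, fc):5.1f}%")

fva = mape(actuals, random_walk) - mape(actuals, process_fc)
print(f"FVA vs random walk: {fva:+.1f} percentage points")
```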

A very disturbing reality is that many organizations forecast worse than if they had just used a naive model. A recent study of eight supply chain companies by Steve Morlidge of CatchBull ("FVA and the Limits of Forecastability," presented at IBF-Amsterdam 2013) found that 52% of their forecasts were worse than a random walk. So for all the time and money spent on forecasting, more than half the time these companies forecast worse than if they had done nothing and simply used the naive model.

For more information you can download the whitepaper “Forecast Value Added Analysis: Step-by-Step”.

Not a member of the Institute of Business Forecasting? You're missing out on networking, member discounts, a subscription to the Journal of Business Forecasting (JBF), and much more. Click HERE to learn more.