Interviewer: Michael Gilliland, SAS
This month’s interview is with Shaun Snapp, founder and editor of SCM Focus, where he provides independent supply chain software analysis, education, and consulting.
Shaun’s experience and expertise span several large consulting companies and i2 Technologies, where he worked before starting SCM Focus. He has a strong interest in comparative software design, maintains several blogs, and has authored 19 books, including Supply Chain Forecasting Software and, most recently, Promotions Forecasting. He holds an MS in Business Logistics from Penn State University.
I asked Shaun about the application of FVA analysis with his clients.
Mike: What forecasting performance metric are you using (e.g., MAPE, weighted MAPE, forecast accuracy), and at what level do you measure (e.g. by Item / Distribution Center / Week with a 3-week lag)?
Shaun: I really only use MAPE or weighted MAPE. In most cases I am comparing different effects on forecast accuracy, so a relative measure is the most appropriate. Because I have to export forecasts and actuals from systems to calculate global figures, weighted MAPE, while certainly the most accurate, is a bit more work to calculate; and of course there are different ways of weighting MAPE, which brings up a separate discussion.
I try to get companies to measure at the Item/DC level. I also raise the point that the relevant measurement horizon is the replenishment lead time. I don’t use any lagging.
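The two metrics Shaun mentions can be sketched in a few lines. This is an illustration of the standard definitions, not his actual workflow; the Item/DC demand figures are hypothetical.

```python
# A minimal sketch of MAPE and volume-weighted MAPE computed from forecasts
# and actuals exported at the Item/DC level. Demand figures are hypothetical.

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, averaged over items with nonzero actuals."""
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return 100.0 * sum(abs(a - f) / a for a, f in pairs) / len(pairs)

def weighted_mape(actuals, forecasts):
    """MAPE weighted by actual demand volume (one of several weighting choices)."""
    total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return 100.0 * total_error / sum(actuals)

actuals = [100, 20, 5]    # hypothetical demand per Item/DC
forecasts = [90, 30, 10]

print(f"MAPE: {mape(actuals, forecasts):.1f}%")                    # prints 53.3%
print(f"Weighted MAPE: {weighted_mape(actuals, forecasts):.1f}%")  # prints 20.0%
```

Note how the plain MAPE is dominated by large percentage errors on low-volume items, which is exactly why the volume-weighted variant, though more work to compute from exported data, is often considered more representative.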
Mike: Are you measuring forecast bias? What are your findings?
Shaun: Yes, very frequently. My finding matches the literature: sales inputs carry a consistent bias, which at my clients is addressed through nothing but planner adjustment.
Mike: Are you comparing performance to a naïve model?
Shaun: No. I tend to compare my clients’ forecast against a best-fit model. I do have an approximation of the percentage of the database that does not need much forecasting energy, because I know what percentage of the database has a level forecast applied; these are both the highly variable items and the very stable items.
My work pretty much stops at getting the system to generate a decent forecast. I don’t have any involvement in what the planners do after that. Most companies I work with have either walked away from the statistical forecast or use only a very small portion of the statistical forecasts that are generated. The planners are free to make any adjustment or change the model applied.
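Although Shaun compares against a best-fit model rather than a naïve one, the naïve baseline that FVA is conventionally measured against is simple to construct: each period’s forecast is just the previous period’s actual. A minimal sketch, with hypothetical weekly demand figures:

```python
# An illustration (not from the interview) of a one-step-ahead naïve forecast,
# the conventional FVA baseline. Weekly demand figures are hypothetical.

def naive_forecasts(history):
    """One-step-ahead naïve forecasts: the forecast for period t is the actual at t-1."""
    return history[:-1]

history = [120, 95, 110, 130, 125]    # hypothetical weekly demand
forecasts = naive_forecasts(history)  # forecasts for weeks 2..5
actuals = history[1:]                 # actuals for weeks 2..5
errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
print(f"Naïve MAPE: {100.0 * sum(errors) / len(errors):.1f}%")
```

Any forecasting step that cannot beat this baseline’s error is, by the FVA argument, not adding value.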
Mike: What are the steps in the forecasting processes you see (e.g., stat forecast, analyst override, consensus meeting override, executive approval)? What FVA comparisons are you measuring?
Shaun: I do all of these comparisons for clients. I am trying to understand what the FVA is at each step so poor quality inputs can be de-emphasized and quality inputs can be emphasized.
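The per-step comparison Shaun describes can be sketched as follows: record accuracy after each process step, and take FVA as the accuracy change each step contributes over the one before it. The step names and accuracy figures (taken here as 100 minus MAPE) are hypothetical.

```python
# A hedged sketch of per-step FVA tabulation. Accuracy is recorded after each
# process step; FVA is a step's accuracy gain over the previous step.
# Step names and accuracy figures are hypothetical.

def fva_by_step(steps):
    """Return (step name, FVA) pairs: each step's accuracy gain over the prior step."""
    return [(name, acc - prev_acc)
            for (name, acc), (_, prev_acc) in zip(steps[1:], steps[:-1])]

steps = [
    ("Statistical forecast", 72.0),
    ("Analyst override",     74.5),
    ("Consensus meeting",    73.0),
    ("Executive approval",   73.0),
]

for name, fva in fva_by_step(steps):
    print(f"{name:22s} FVA = {fva:+.1f} points")
```

A negative FVA at a step flags an input to de-emphasize; a positive one, an input to emphasize.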
The bigger problem is impressing the importance of FVA on clients. I can’t recall finding any work of this type done at clients before I arrive. I think this is because it takes work, and demand planners are busy doing other things. Because so many manual adjustments have to be made, and because so many meetings are necessary with the groups that provide forecasting input, most demand planning departments seem overworked relative to their staffing levels.
Most of the forecasting consulting that comes before me is system-focused: adding characteristics to a view, creating new data cubes, that sort of thing. There seems to be a much smaller market for forecast input testing. It is something I bring to clients, not normally something they ask for. Many decisions are still very much made based upon opinions and “feel.” In fact, I find it very rare for the attribute or characteristic used to create a disaggregated forecast to have been proven to improve forecast accuracy before it is implemented in the system.
Mike: Anything else you’d like to say about FVA? Including advice for other companies considering the application of FVA?
Shaun: I have never seen any forecasting group that based its design upon FVA.
This is not to say that lip service is not paid to FVA: if you bring up the topic, most people will agree it makes sense. However, really using FVA means being very scientific in how one measures different forecast inputs, and while businesses use math, they are generally not particularly aligned with scientific approaches.
Too few people, whether in companies or working as consultants, understand how to perform and document comparative studies. Documentation is a very important part of the process, and this is a serious limitation at every company I have ever come into contact with, from the biggest to the smallest; industry affiliation does not seem to matter very much in this regard.
On a different topic, as the literature points out and as I can certainly attest, some groups have a negative interest in FVA. That is, some groups want to provide input to the forecast but don’t particularly care whether they are right, and don’t particularly want to be measured. Some groups just want to ensure the in-stock position of their items. These groups are very powerful and exert a great deal of pressure on the supply chain forecasting group to accept their forecasting input.
Further, this touches on the fact that there is not simply “one forecast.” There are really multiple forecasts, and while there is discussion of unifying them, in reality this is not easy to do, because different groups have different financial and other incentives and see things through different lenses.
I would say the norm is poor-quality forecasting, or forecast inputs that are entirely unregulated as a matter of policy (though regulated to a degree by individual planners).
Willing to share your experiences with FVA? Please contact the IBF at info@ibf.org to arrange an interview for the blog series.