The storyline is an old one. It was the theme of the 1983 American comedy Trading Places, starring Dan Aykroyd and Eddie Murphy. You may remember it; it is one of my favorite funny movies, in which an upper-class commodities broker and a homeless street hustler switch roles when they are unknowingly made part of an elaborate bet.
It is an ageless theme: someone less fortunate trades places with someone more fortunate. As a child, I was enthralled as I saw it play out in Mark Twain's The Prince and the Pauper and Disney's The Parent Trap. While these are fictional stories, this week I found a story where it happened in real life. Some of my favorite supply chain management leaders, organizations that I have worked with over the past seven years, had traded places in their organizational capabilities to forecast demand, and it was not a conscious choice.
Before I tell the story, let me share a quick perspective on benchmarking demand metrics. I have been working in this area for seven years. It is one of the hardest areas of the supply chain to benchmark, and I would like to take this opportunity to share my personal experience.
While companies eagerly want the data that benchmarking reports provide, benchmarking forecast accuracy is tricky. Why is it so hard? Let’s start with two major reasons:
- It's Hard to Get Apples to Apples. It Is a Fruit Basket. The first reason that benchmarking forecast accuracy is tough is that every company does it differently. When doing this type of work, it is essential to have an "apples to apples" comparison. To do this, you need to look closely at five variables: the frequency of planning; the granularity of planning (does the organization plan monthly, weekly, or daily?); the construct of the data model; the inputs to the data model (e.g., shipments, orders, channel data); and the drivers of demand forecast variance, such as promotions and seasonal builds. To get it right, the data must be scrubbed and normalized to ensure an apples-to-apples comparison. As a result, companies should never accept data from self-reported sources.
- The Apple Doesn't Fall Far from the Tree. The second reason that benchmarking forecast accuracy can be difficult is that the data can be hard to get. Because market conditions change, a useful data set must represent a like peer group from the same point in time. Since many companies run multiple supply chains, and competitors are reluctant to share data directly with one another, getting the data is quite a feat.
I ran into the CEO of a major forecasting software developer earlier this week, and I was excited to learn that he had just finished a project to benchmark demand data for consumer products companies, to be deployed with his software solution. Five of the companies were organizations that I had benchmarked in 2003 and worked with over the past five years. While neither he nor I can share the names of the companies, I would like to share my insights on their journey. It is truly a story of trading places. (In Table 1, I use fictional names to hide the identity of the companies involved in the case study.)
While this story may not be as much fun as the original Trading Places movie, it is a real story where a focus on supply chain basics made a difference. In Table 1, I show the relative positions of the companies in the two analyses:
Table 1: Comparison of Five Consumer Products Companies Forecast Accuracy
| Monthly Forecasting at an Item/Ship-From Level at a 30-Day Lag | 2003 Relative Ranking of Forecast Accuracy | 2011 Relative Ranking of Forecast Accuracy | Technology Used | Organizational: Regional vs. Global Focus |
|---|---|---|---|---|
| A | 1 | 5 | SAP APO | Matrix organization with a change in reporting through go-to-market teams |
| B | 2 | 2 | SAP APO | Centralized with a strong focus on analysis |
| C | 3 | 4 | JDA/Manugistics | Strong regional focus |
| D | 4 | 1 | JDA/Manugistics | Matrixed organization with global reporting through supply chain |
| E | 5 | 3 | SAP APO | Centralized with strong IT/line-of-business partnering |
Progress? For the group of companies that were benchmarked, the average monthly Mean Absolute Percentage Error (MAPE) at a one-month lag was 31% ± 12%. Data from eight years earlier for the same companies averaged 36% ± 10% MAPE. What was the result? This group of consumer products leaders has gotten slightly, but not significantly, better at demand forecasting. They have weathered the storm of market changes that could have made the forecast much worse: the industry has experienced major shocks, including shorter product lifecycles, product proliferation, higher levels of promotion, changes in competitive behavior, and global expansion.
Better Math? Consistent with other industry studies over the last ten years (e.g., IBF, AMR Research), the data from the present study shows that statistical models have not improved forecasting and planning processes in leaps and bounds. In the new benchmarking study, the use of statistical modeling software improved the forecast by 3% on average (measured as MAPE at a one-month lag) when compared to a naive forecast, where this month's volume plan is simply what was shipped last month. In the top quartile of customers, the impact was 2X, or a 6% improvement in MAPE. One finding was consistent in both studies: when the forecasting group reports to sales, forecast bias is higher.
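To make the comparison above concrete, here is a minimal sketch of how MAPE at a one-month lag stacks a statistical forecast against a naive "ship what we shipped last month" forecast. The shipment and forecast numbers are illustrative assumptions, not the study's actual data.

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, as a fraction (multiply by 100 for %)."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

# Hypothetical monthly shipments for one item at one ship-from location.
shipments = [100, 120, 90, 110, 105, 130]

# Naive forecast at a one-month lag: this month's plan = last month's shipments.
naive_forecast = shipments[:-1]   # forecasts for months 2..6
actuals = shipments[1:]           # actual shipments for months 2..6

# A hypothetical statistical forecast for the same months.
stat_forecast = [110, 100, 105, 108, 120]

print(f"Naive MAPE:       {mape(actuals, naive_forecast):.1%}")
print(f"Statistical MAPE: {mape(actuals, stat_forecast):.1%}")
```

The improvement the study describes is simply the gap between these two MAPE values; the benchmarking question is whether the statistical model earns its keep against this trivial baseline.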
What is a 6% improvement in forecast accuracy worth? Based on AMR Research correlations, a 6% forecast improvement could improve the perfect order by 10% and deliver a 10-15% reduction in inventory. Slow moving items on the tail of the supply chain are most greatly impacted by this. Unfortunately, most companies let their supply chain tail whip them around.
It doesn't just happen. Basics matter. For me, the interesting story lies beneath the data: the switch in position of the players over the past eight years. In this period, the best-in-class company from 2003 became the worst performer, and the two lowest performers propelled themselves upward. As I thought about why, and recounted my many experiences with these companies, several ideas came to mind:
- Moving down. The company with the worst performance in the current benchmarking, and the best in 2003, had a very high bias. Why? Shortly after the 2003 benchmarking, the company decided to have the forecasting group report through sales, where there was a pervasive belief that over-forecasting would drive higher sales. This decision increased bias and cast a cloud over the forecasting and planning process. The lack of a "true north" in the organization became a stumbling block to improving forecast accuracy.
- Moving up. The companies that moved up in the analysis focused hard on the basics: cleaning data, frequently tuning supply chain planning software, building a strong corporate demand planning team that reports through supply chain, and using statistics.
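The bias problem described above is easy to measure but distinct from accuracy: MAPE penalizes errors in either direction, while bias captures a systematic lean. A minimal sketch, using mean percentage error (one common bias measure; the numbers are illustrative, not from the study):

```python
def bias(actuals, forecasts):
    """Mean Percentage Error: positive means systematic over-forecasting."""
    errors = [(f - a) / a for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

actuals = [100, 120, 90, 110]

# Hypothetical forecasts: one consistently inflated (as described for the
# organization reporting through sales), one roughly unbiased.
inflated = [115, 135, 105, 125]
balanced = [98, 124, 88, 112]

print(f"Inflated forecast bias: {bias(actuals, inflated):+.1%}")
print(f"Balanced forecast bias: {bias(actuals, balanced):+.1%}")
```

Tracking a signed measure like this alongside MAPE is what exposes the "over-forecast to drive sales" pattern, since the inflated forecast can look only moderately inaccurate while leaning in one direction every month.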
Thoughts on tactical forecasting: While technology vendors like to brag that their technology will make a difference in supply chain leadership, the data here is inconclusive on that point. Instead, what made a difference in relative position was process, data, and organizational reporting. I know this may not be the sexy stuff, but the basics matter.
Wrapping it Up
I commend this software developer for spending the energy and the manpower to benchmark their client base. This type of commitment to one's client base differentiates a vendor and creates long-term relationships. It is my hope that this type of analysis can become part of continuous improvement efforts for supply chain leaders.
I look forward to sharing these journeys and many other lessons I have learned at IBF's Demand Planning & Forecasting: Best Practices Conference with Demand Management Forum in Dallas, Texas, USA this coming May 2011.
Please let me know your thoughts!