Dr. Jain Answers your S&OP and Forecasting Questions — July 2016

We regularly receive S&OP and Forecasting questions from the IBF membership base that are answered in the Journal of Business Forecasting (JBF). I have identified five common questions and provide my responses below.

In your judgment, should we centralize the forecasting function?

I am more inclined toward centralization. If it is centralized, the department will be independent, and thus forecasts will be unbiased. There will be a single point of contact; if anyone needs a forecast, he or she will know where to go. Further, it will be easier to develop consensus, and the forecasting department will be highly accountable. If the forecasting function is decentralized, everyone will do their own thing. It will be difficult to get input from others, which is essential in preparing forecasts. There will be multiple forecasts, making it difficult to align supply with demand. In addition, in a decentralized environment, if resources are needed somewhere, there is a temptation to drop some of the people from the forecasting staff. Further, if someone leaves, there won’t be anyone to train the newcomer.

How much inventory should we hold so that we have the right items, at the right place, and at the right time?

It all depends on whether the product is high or low in value, whether it is highly predictable or difficult to predict, what its production cost is, and what its lead time is. You may like to hold more inventory of products that are high in value but difficult to forecast. On high-value products, you don't want to lose any orders, so it may be better to hold a little more inventory for them. On low-value products, if the cost of production goes down significantly when produced in larger lots, you may like to hold more inventory. Lead time also makes a difference: you may like to hold a little more inventory of products with longer lead times. There are formulas available for determining safety stocks; the calculation is generally based on factors such as forecast error, the desired customer service level, and lead time. Based on the 2015 survey by the Institute of Business Forecasting, when all industries are combined, businesses hold 37 days of inventory of finished goods and 27 days of raw material.
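
For readers who want a concrete starting point, here is a minimal sketch of one common textbook safety-stock formula (a service-level multiplier times the standard deviation of forecast error over the lead time). This is not a formula prescribed in the column, and the numbers are purely illustrative, not drawn from the survey cited above.

```python
from scipy.stats import norm

# Illustrative inputs (hypothetical values)
service_level = 0.95      # desired cycle service level
sigma_error = 120.0       # std. dev. of weekly forecast error, in units
lead_time_weeks = 4       # replenishment lead time, in weeks

# Classic formula: safety stock = z * sigma * sqrt(lead time)
z = norm.ppf(service_level)                       # ~1.645 for a 95% service level
safety_stock = z * sigma_error * lead_time_weeks ** 0.5

print(f"Safety stock: {safety_stock:.0f} units")  # ~395 units in this example
```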

What are the pros and cons of forecasting at an aggregate level (total U.S.) versus DC level? Which is more common, and does forecasting at a DC level require additional headcount?

The level at which we should forecast depends on the objective. If the objective is to determine total sales for the whole year, we should forecast at an aggregate level. This is good for strategic planning, but not for operational planning, where most forecasts are used. For production planning, you would need forecasts at the most granular level: not only at a DC level but also at a SKU level. How many forecasters are required depends on a number of things, including how many forecasts have to be prepared, at what level, and how difficult those products are to forecast. Not all products are equally forecastable.
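
As a minimal illustration of the two levels (using pandas, with hypothetical SKUs, DCs, and forecast values), the sketch below shows how granular SKU-by-DC forecasts roll up, bottom-up, into the aggregate total used for strategic planning:

```python
import pandas as pd

# Hypothetical SKU x DC forecasts for one month, in units
forecasts = pd.DataFrame({
    "sku":      ["A100", "A100", "B200", "B200"],
    "dc":       ["East", "West", "East", "West"],
    "forecast": [1200, 800, 500, 700],
})

# Operational view: forecast by DC (what deployment and replenishment planning need)
by_dc = forecasts.groupby("dc")["forecast"].sum()
print(by_dc)

# Strategic view: the aggregate (total U.S.) forecast
print(f"Aggregate forecast: {forecasts['forecast'].sum()} units")
```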

New products are difficult to forecast, but their number is increasing every year. How should we handle them in the S&OP process?

There is no question that the role of new products is on the rise. According to a recent survey by the Institute of Business Forecasting, about 22% of sales revenue now comes from new products. Therefore, they need special attention. The best way is to split new products into two groups. The first group may include products that are relatively easy to forecast, such as new products resulting from line extensions and product improvements; low-value new products can also be lumped into that group. The second group includes products that are new to the company and to the world, and those that result from market extension. The products in the first group can be handled as a part of the regular S&OP process. However, products in the second group require special attention. For that, we need a separate team that generates forecasts, reviews their performance on a weekly cycle, and then develops a course of action. The team should include people who have the power to make decisions so that decisions can be implemented right away.

In measuring forecast accuracy and bias, should we keep the same denominator?

We should use the same denominator, which is the actual, whether we measure forecast error or bias. Otherwise, we will be looking at the forecast error from one perspective and bias from another. We use the actual as the denominator in measuring forecast accuracy because we want to see how the forecast deviates from the actual, not how the actual deviates from the forecast. The same is true with bias.
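
As a minimal illustration (with hypothetical forecasts and actuals), the sketch below keeps the actual as the denominator for both metrics, so error and bias are viewed from the same perspective:

```python
# Hypothetical monthly forecasts and actuals, in units
forecasts = [100, 120, 90, 110]
actuals   = [ 95, 130, 85, 105]

# Forecast error with the actual in the denominator (MAPE-style)
mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals) * 100

# Bias with the same denominator: positive means over-forecasting on balance
bias = sum(f - a for f, a in zip(forecasts, actuals)) / sum(actuals) * 100

print(f"MAPE: {mape:.1f}%   Bias: {bias:+.1f}%")
```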

If you have an S&OP, Demand Planning, or Forecasting question you would like answered, please send it to jainc@stjohns.edu. Comments on any of the above are also welcome.

Meet Dr. Jain at IBF Academy 2016 in Las Vegas, where he will be leading the session on Forecasting Accuracy and Metrics.

IBF’s Journal of Business Forecasting (JBF) | St. John’s University – New York USA
Chief Editor | Professor
Dr. Jain is Professor of Economics at St. John's University in New York, USA, where he mainly teaches a graduate course on business forecasting. He is also Chief Editor of the IBF's Journal of Business Forecasting. He has written over 100 articles, mostly in the area of forecasting and planning, and has authored, co-authored, or edited nine books, seven of them in the area of forecasting and planning. His new book, "Fundamentals of Demand Planning and Forecasting," is the basis of IBF's body of knowledge. In a consulting capacity, he has worked for many large multinational companies, including Hewlett Packard, Union Fidelity Life Insurance Company, Prince Manufacturing, CECO Doors, and Taylor Made Golf. He has conducted workshops on business forecasting and planning for various organizations, including Sweetheart Cup, Eastman Kodak, Jockey International, SABIC, Saudi Aramco, DU-Emirates Integrated Telecommunications Co. (Dubai, UAE), Symbios Consulting Group (Egypt), Goody (Saudi Arabia), and Al-Nahdi Medical. He has made presentations on business forecasting and planning at IBF conferences and workshops, the Council of Supply Chain Management, INFORMS, DMDNY in New York, John Galt Solutions, and SAS. He has been invited by various institutions to speak on business forecasting and planning, including University Technology Malaysia; the School of Future Studies & Planning, Devi Ahilya University, India; and Apeejay Svran Institute of Management, India. He is the recipient of the Direct Marketing Educational Foundation's 1994 best paper award.

4 Responses to Dr. Jain Answers your S&OP and Forecasting Questions — July 2016

  1. Dr. Jain,
    As to the centralization question: is it not the function of the S&OP team to sort through the different needs of the silo functions, which would include their different forecast needs? Thus, is S&OP not functioning as a centralized forecasting function with decentralized participants from throughout the company?

  2. The centralization answer may be appropriate for forecasting per se. Nevertheless, forecasts need to be converted into sales plans. The difference is that forecasts try to foresee what the market will demand, under certain assumptions, while sales plans state what the company commits to sell. Failure to recognize the difference is at the root of many of the problems. Sales plans can be centrally coordinated, but the accountability for the sales plans must be placed firmly on the people who will execute them. If these people are decentralized in different business lines or market segments, the accountability will also have to be decentralized, no matter how centrally the process is coordinated.
    Sales plans may become different from the initial forecasts, not only because of short supply but also because of strategic or tactical decisions, e.g., to change prices, advertising, and promotions.
    Once forecasts are adjusted to become sales plans, it makes no sense to measure actuals against the initial forecasts; they should be measured against the sales plans.
    Some try to capture that by using the term “consensus forecasts”. The problem is that “forecast” implies much less determination to make it happen than “sales plan” does.
    It is important to emphasize that the accuracy of the forecasts (better yet, sales plans) is only half dependent upon the quality of the forecasting process. The other half depends on the quality of the execution process. Therefore, a central forecasting function cannot be accountable for more than half of the outcome and cannot centrally resolve the deviations.
    This is another major cause of frustration and failure in “improving forecasts”, as the root causes often lie predominantly in the execution arena.

    S&OP is, in many companies, confused with lower-level, SKU planning. S&OP should be focused on medium-term, family-level planning. At this level, it should make the final decision on the sales plans. Forecasts should be an input from the forecasting, marketing, and sales functions, treated through the Demand Review step of S&OP.
    This level of forecasts/sales plans will determine the family-level, medium-term production plans. The S&OP should not be concerned with SKU-level, short-term forecasts/sales plans, as long as they reconcile, when aggregated, to the family-level plans.
    There should be a master planning or equivalent function and processes, below the S&OP, to support that.

    New products, ‘regular’ or ‘special’, simple or complicated, should be reviewed in the S&OP process. The Oliver Wight model is, in this sense, much more appropriate than Wallace’s, defining the Product Review as step 1. The Demand Review is, in OW’s model, step 2, which makes sense: one can only determine the proposed sales plans after understanding what will happen with the product portfolio: additions, changes, and discontinuations.
    But the S&OP cannot be made into the process to manage the products or the product portfolio. The S&OP is only a monthly review, decision and aggregate planning process, taking inputs from the regular management processes, and returning the decisions and plans made to those same processes for detailing and execution.

    There is a lot of confusion around S&OP, unfortunately.

  3. Earl, you are right, it is not the function of the S&OP team to go over the forecasting needs of different functions. Forecasting is one of the components or steps to be followed, without which the S&OP process won’t function. To have the best forecasts, it needs to do two things: one, prepare statistical forecasts; and two, through a consensus process, overlay judgment on the statistical forecasts to account for information that cannot be quantified and for information that was not available at that time. What I am saying is that statistical forecasts should be centralized, not the S&OP process.

  4. Daniel,

    To understand my viewpoint on centralization, it is important to recognize the best way to prepare forecasts. To me, it is a two-step process: one, prepare baseline forecasts, which are statistical forecasts; and two, through a consensus process, overlay judgment on the baseline numbers to account for information that cannot be quantified and/or for information that was not available at the time the baseline forecasts were generated. Although the marketing plan affects forecasts, and forecasts affect the marketing plan, ultimately, for practical purposes, we have to come up with forecasts that are based on the final marketing plan. The centralized team has to obtain input from Sales and Marketing before generating statistical forecasts, so that the impact of any change in plans for price, advertising, and the number of products to be launched or delisted is reflected in the statistical forecasts. So we don’t need to convert forecasts into a marketing plan. Believe me, even statistical forecasts can be biased. Since the centralized team will be as impartial as possible, forecasts will be pretty much unbiased. As far as accountability goes, if forecasts are finalized through a consensus process, the consensus team should be accountable. Sales, as a member of the consensus team, is committed to those numbers. You are right that S&OP should be responsible for family-level forecasts, not SKU-level forecasts.

    You raised a very interesting point about new products. They should certainly be reviewed as a part of the S&OP process. But I will go one step further and say that we need a separate team, within S&OP, to review new products, and it should meet more often than monthly, maybe weekly. The reason I am saying this is that a large percentage of sales (23%, based on the recent IBF survey) now comes from new products, and thus they require special attention. Plus, their life cycles are getting shorter and shorter. There are a number of new products that present problems but are fixable; the sooner we take care of them, the better the chances of their success. Some new products offer new opportunities as well; again, the sooner we recognize and tap into them, the better. (Whatever decision is made by this team should be communicated to the S&OP team so that it can be incorporated into the plan.) Take the example of New Coke, launched in 1985: it took 77 days to bring back the old Coke. It looks like Pepsi did not learn a lesson from it. In August 2015, in an effort to reverse the declining sales of Diet Pepsi, Pepsi replaced the sweetener aspartame with sucralose. Of course, this did not help sales and turned off customers. Despite all the early warnings, it decided to bring back the old Diet Pepsi about one year later. If there had been a separate new-product planning team that monitored performance closely, it would have acted much earlier.
