The integration of Artificial Intelligence (AI) and Machine Learning (ML) into the Sales & Operations Planning (S&OP) process allows companies to become more agile in responding to changes in demand. Data sources that were previously too complex or too voluminous to analyze are becoming standard planning inputs.

For example, demand planners can now draw on data from sources such as online reviews, competitors’ advertising, and even tweets.

With all that AI/ML can do to enhance the planning process, does this mean that the current role of demand planning is doomed? Should those of us who are currently working in the demand planning arena begin looking for new jobs?

Not so fast. While AI and ML offer our planning processes new and powerful ways of managing inputs to demand, they also have some significant limitations. And I believe human Demand Planners will be required to ensure that AI and ML are truly effective in the planning process. To better understand how these technologies and human Demand Planners can complement each other, let’s begin by looking at some of the often-overlooked limitations of these tools.

1. Patterns, Patterns Everywhere

The primary way that these tools can assist us in planning is by finding patterns in large amounts of complex data. As companies gather more and more data about their customers and their businesses, the quantity of data that they need to analyze to make good decisions becomes more than human Demand Planners can manage. By training AI/ML systems to find patterns that are buried in these mountains of data, companies can exploit data that was previously inaccessible due to volume or complexity.

But patterns can be a trap. Just because a customer has bought a large quantity of product every September for the last six years does not mean it will happen again this year. We still need to assess other factors that might influence the pattern, such as pricing, product features, and competitive products. So while AI/ML are good at indicating that these sales might recur, it takes human input and research to determine whether it makes sense to bet on them recurring. In time we may be able to train these systems to incorporate these additional inputs; in the meantime, human insight will have to fill the gap.
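To make the pattern trap concrete, here is a minimal sketch, in Python, of how a purely pattern-driven forecast treats the September example above. The order quantities and the simple averaging model are illustrative assumptions, not a real customer’s data or any particular vendor’s algorithm.

```python
# A minimal sketch of a purely pattern-driven forecast for a recurring
# September spike. The order quantities below are made up for illustration.
import statistics

# Six years of September order quantities for one hypothetical customer.
september_orders = [1200, 1150, 1300, 1250, 1280, 1320]

# A pattern-based model simply projects the history forward...
projected_order = statistics.mean(september_orders)
print(f"Projected September order: {projected_order:.0f} units")

# ...but it knows nothing about the factors a planner would check first:
# a price increase, a dropped product feature, or a competitor's launch
# could make this year's actual order far smaller, or zero. Until the
# model is trained on those inputs, that judgment still comes from people.
```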

2. Intelligence vs. Common Sense

Where human beings can infer things from common sense, AI/ML can be stumped. Consider this scenario: A man went to a restaurant. He ordered a steak. He left a big tip. If I asked a friend what the man ate, the friend would say a steak. But most AI/ML systems would struggle to get this answer, because nothing in these statements explicitly describes what the man ate, only what he ordered. From experience, we know that what we order in a restaurant is usually also what we eat. This sort of common-sense extrapolation from context is difficult for AI/ML systems.

In a planning scenario this can be a major problem. For example, a customer always orders a large quantity of a certain product at Thanksgiving, and later returns about 20% of it because it has not sold. Does this mean the customer doesn’t know how to order correctly? Or does it mean they need the excess quantity to keep their displays full throughout the selling season? AI/ML can’t answer these questions, but human planners can contact the customer and assess what the real issue is.

3. AI Has Limited Adaptation

One of the strengths of human intelligence is that the human mind can easily adapt to new information. If I tell you that a customer just went bankrupt, from experience you will know what impact this might have on your business. You can quickly adjust your processes to accommodate this change. AI/ML can’t react that quickly. These systems would need to be retrained to know what to do in this situation.

And since each situation would be slightly different, any training provided for one scenario would have only limited application to later ones. These systems need ongoing training to be truly agile and adaptive.

4. AI Has No Understanding Of Cause & Effect

Humans instinctively understand cause and effect from experience. If I drop a glass on a hard floor, it will shatter. But the drop itself is not what shatters the glass; the impact with the floor is. Here’s another example: we know from experience that roosters crow when the sun rises. AI/ML have no trouble learning this relationship. But ask whether the rooster’s crowing causes the sun to rise or vice versa, and these systems are stumped.

Return to the customer bankruptcy example above: from experience, we can usually assess the likely causes quickly: lack of sales, high expenses, loss of funding, stronger competition, and so on. Our experience lets us make these mental jumps easily. Without extensive training, however, AI/ML systems would struggle to connect causes to effects.

5. AI Lacks Ethics

AI/ML systems will reflect the biases and perspectives of the humans who trained them. They can’t tell right from wrong. Programming these systems to capture the complexity of human values, and how those values adapt to different and changing situations, is extremely difficult. Allowing them to make certain types of decisions can therefore be dangerous.

For example, in planning how much credit to extend to a customer, we can train a system to analyze the business factors that make a customer a good or a bad business risk. But these systems can’t tell us how well these customers manage their environmental or societal impact. If these factors are important to our decision, we need human input.

Complementary, Not Competitive

Given the limitations of AI/ML discussed here, it might seem that these tools are less useful than we first thought. The truth is that they are extremely useful when we are dealing with large amounts of data and have the time, skill, and resources to train them properly. What they lack is what human planners can provide. At its best, I believe the combination of properly trained AI/ML systems and experienced Demand Planners can be extremely effective at drawing out the insights hidden in the data.

To make the most of this relationship, demand planners will need to develop some new skills. While we can leave much of the data analysis to the systems, human insight grounded in broad experience is required to make the most of what they provide. Soft skills such as relationship-building, listening, innovating, and thinking strategically, combined with the input of AI, can make our planning both more agile and more effective.

 

This article originally appeared in the Fall 2021 issue of The Journal of Business Forecasting.