Without KPIs, it is impossible to improve forecast accuracy. Here are 8 highly effective metrics that allow you to track your forecast performance, complete with their formulas.

Forecast Accuracy

This KPI is absolutely critical because the more accurate your forecasts, the more profit the company makes and the lower your operational costs. We choose a particular forecasting method because we think it will work reasonably well and generate promising forecasts, but we must expect that there will be error in our forecasts. This error is the difference between the actual value (Dt) and the forecast value (Ft) for that period. It is measured as:

 Forecast Accuracy: 1 – [ABS(Dt – Ft) / Dt]

Where,

Dt: The actual observation or sales for period t

Ft: The forecast for period t
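As a minimal sketch, the per-period calculation might look like this in Python; the sample figures are hypothetical:

```python
def forecast_accuracy(actual, forecast):
    """Per-period forecast accuracy: 1 - |Dt - Ft| / Dt (Dt must be non-zero)."""
    return 1 - abs(actual - forecast) / actual

# Hypothetical example: 100 units sold against a forecast of 90
print(forecast_accuracy(100, 90))  # 0.9, i.e. 90% accurate
```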

Our focus with this KPI is on forecast accuracy benchmarks for groups of SKUs rather than on identifying the most appropriate forecasting methods. For example, achieving 70-80% forecast accuracy for a newly launched, promotion-driven product would be good, considering we have no sales history to work from.

SKUs with medium forecastability (volatile, seasonal, and fast-moving SKUs) are not easy to forecast owing to seasonal factors like holidays and uncontrollable factors like weather and competitors’ promotions. Even so, their benchmark should be no lower than 90-95%.

Tracking Signals

Tracking signals (TS) quantify bias in a forecast and help demand planners understand whether the forecasting model is working well. The TS for each period is calculated as:

 TS: (Dt – Ft) / ABS(Dt – Ft)

Where,

Dt: The actual observation or sales for period t

Ft: The forecast for period t

Once the TS is calculated for each period, the values are summed to obtain the overall TS. When a forecast, for instance, is generated from the last 24 observations, a forecast history totally devoid of bias will return a value of zero. The worst possible results would be +24 (consistent under-forecasting) or -24 (consistent over-forecasting). Generally speaking, a forecast history returning a value greater than +4.5 or less than -4.5 would be considered out of control. Therefore, without considering the forecastability of SKUs, the TS benchmark needs to stay between -4.5 and +4.5.
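A rough Python sketch of this running calculation, using a short, made-up sales history:

```python
def tracking_signal(actuals, forecasts):
    """Sum of per-period signals: +1 for under-forecast, -1 for over-forecast."""
    ts = 0
    for d, f in zip(actuals, forecasts):
        if d != f:  # skip zero-error periods to avoid division by zero
            ts += (d - f) / abs(d - f)
    return ts

actuals   = [100, 120, 90, 110]  # hypothetical observations
forecasts = [95, 125, 85, 100]
print(tracking_signal(actuals, forecasts))  # 2.0: three under-, one over-forecast
```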

Bias

Bias, also known as Mean Forecast Error, is the tendency for forecast error to be persistently in one direction. The quickest way of improving forecast accuracy is to track bias. A bias of zero means the forecast is unbiased. Negative values reveal a tendency to over-forecast, while positive values indicate a tendency to under-forecast. Over a period of 24 observations, if bias is greater than +4, the forecast is considered biased towards under-forecasting. Likewise, if bias is less than -4, the forecast is biased towards over-forecasting. In the end, the aim of the planner is to minimize bias. The formula is as follows:

Bias: [∑ (Dt – Ft)] / n

Where,

Dt: The actual observation or sales for period t

Ft: The forecast for period t

n: The number of forecast errors
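As a small illustration (the numbers below are made up), bias can be computed like this in Python:

```python
def bias(actuals, forecasts):
    """Mean forecast error: sum(Dt - Ft) / n."""
    errors = [d - f for d, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

# Hypothetical data: errors of +10, +10 and -5 average out to +5
print(bias([100, 120, 90], [90, 110, 95]))  # 5.0, leaning towards under-forecasting
```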

Forecaster bias appears when forecast error runs in one direction for all items, i.e. they are consistently over- or under-forecasted. It is a subjective bias caused by people building unnecessary safeguards into the forecast, such as inflating the forecast to match sales targets or division goals.

Considering the forecastability level of SKUs, the bias of low forecastability SKUs can be between -30 and +30. When it comes to medium forecastability SKUs, since their accuracy is expected to be between 90-95%, bias should be no less than -10 and no greater than +10. Regarding high forecastability SKUs, due to their moderate contribution to the total, bias is not expected to be less than -20 or greater than +20. The less bias there is in a forecast, the better the forecast accuracy, which allows us to reduce inventory levels.

Mean Absolute Deviation (MAD)

MAD is a KPI that measures forecast accuracy by averaging the magnitudes of the forecast errors. It uses the absolute values of the forecast errors so that positive and negative errors do not cancel each other out when summed. Its formula is as follows:

MAD: ∑ |Et| / n

Where,

Et: the forecast error for period t

n: The number of forecast errors
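A minimal Python version, reusing the same hypothetical series as above:

```python
def mad(actuals, forecasts):
    """Mean absolute deviation: sum(|Et|) / n."""
    return sum(abs(d - f) for d, f in zip(actuals, forecasts)) / len(actuals)

print(mad([100, 120, 90], [90, 110, 95]))  # (10 + 10 + 5) / 3 ≈ 8.33
```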

MAD does not have a specific benchmark for accuracy, but the smaller the MAD value, the higher the forecast accuracy. Since MAD is expressed in the same units as the data, comparing the MAD values of different forecasting methods on the same series reveals which method is most accurate.

Mean Square Error (MSE)

MSE evaluates forecast performance by averaging the squares of the forecast errors, which removes all negative terms before the values are added up. Squaring the errors achieves the same outcome as taking their absolute values, since the square of a number is always non-negative. Its formula is as follows:

MSE: ∑(Et)² / n

Where,

Et: forecast error for period t

n: the number of forecast errors
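In Python, with the same hypothetical data, the calculation is a one-liner; note how the +/-10 errors weigh far more heavily than they did in MAD:

```python
def mse(actuals, forecasts):
    """Mean squared error: sum(Et^2) / n."""
    return sum((d - f) ** 2 for d, f in zip(actuals, forecasts)) / len(actuals)

print(mse([100, 120, 90], [90, 110, 95]))  # (100 + 100 + 25) / 3 = 75.0
```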


Similar to MAD, MSE does not have a specific benchmark, but the smaller the MSE, the better the forecasting model and the more accurate the forecasts. The advantage of MSE is that squaring the errors gives more weight to large forecast errors.

Mean Absolute Percentage Error (MAPE)

MAPE expresses forecast error as a percentage, relating each forecast error (Et) to the corresponding actual observation (Dt). Its formula is as follows:

MAPE: [∑ |Et / Dt| / n] * 100

Where,

Dt: Actual observation or sales for period t

Et: the forecast error for period t

n: the number of forecast errors
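A small Python sketch, again on made-up numbers, shows the percentage reading:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error; actuals must be non-zero."""
    n = len(actuals)
    return sum(abs((d - f) / d) for d, f in zip(actuals, forecasts)) / n * 100

print(mape([100, 120, 90], [90, 110, 95]))  # ≈ 7.96%
```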

Since the result of MAPE is expressed as a percentage, it is much easier to interpret than the other techniques. Its advantage is that it relates each forecast error to its actual observation. However, items with a very high MAPE, typically low-volume ones, may distort the average MAPE. To avoid this problem, SMAPE, addressed below, is offered.

Symmetrical Mean Absolute Percentage Error (SMAPE)

SMAPE is an alternative to MAPE when the data contain zero and near-zero observations. Low-volume observations mostly cause high error rates and skew the overall error rate, which can be misleading; this is where SMAPE comes in handy. SMAPE has a lower bound of 0% and an upper bound of 200%. Note that it does not treat over-forecasting and under-forecasting equally. Its formula is as follows:

SMAPE: [2/n * ∑ |(Ft – Dt) / (Ft + Dt)|] * 100

Where,

Dt: Actual observation or sales for period t

Ft: the forecast for period t

n: the number of forecast errors
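A minimal Python sketch, assuming the same hypothetical series; the * 100 expresses the result on the 0-200% scale:

```python
def smape(actuals, forecasts):
    """Symmetric MAPE, bounded between 0% and 200%."""
    n = len(actuals)
    total = sum(abs(f - d) / (f + d) for d, f in zip(actuals, forecasts))
    return 2 / n * total * 100

print(smape([100, 120, 90], [90, 110, 95]))  # ≈ 8.21%
```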

Similar to the other metrics, there is no specific benchmark for SMAPE. The lower the SMAPE value, the more accurate the forecast.

Weighted Mean Absolute Percentage Error (WMAPE)

WMAPE is an improved version of MAPE. Whereas MAPE gives each observation equal weight regardless of its volume, WMAPE weights each forecast error by the size of its actual observation (sales). This matters when generating forecasts for high-value items at the category, brand, or business level: under plain MAPE, small erratic items can dominate the average, while under WMAPE high-value items drive the overall error, which is appropriate because they are highly correlated with safety stock requirements and the development of safety stock strategies. Its formula is as follows:

WMAPE: ∑(|Dt – Ft|) / ∑(Dt)

Where,

Dt: The actual observation for period t

Ft: the forecast for period t
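As a final sketch on the same hypothetical data, note how the totals are formed before dividing, so high-volume items dominate the result:

```python
def wmape(actuals, forecasts):
    """Weighted MAPE: total absolute error divided by total actual sales."""
    total_error = sum(abs(d - f) for d, f in zip(actuals, forecasts))
    return total_error / sum(actuals)  # multiply by 100 for a percentage

print(wmape([100, 120, 90], [90, 110, 95]))  # 25 / 310 ≈ 0.0806, about 8.1%
```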

Like other techniques, WMAPE does not have any specific benchmark. The smaller the WMAPE value, the more reliable the forecast.