Measuring forecast accuracy


Most engineers will tell you that:

You can’t optimize what you don’t measure

It turns out that forecasting is no exception: measuring forecast accuracy is one of the cornerstones of any forecasting technology.

A frequent misconception about accuracy measurement is that Lokad has to wait for the forecasted periods to elapse before comparing the forecasts with what actually happened.

Although this approach works to some extent, it comes with severe drawbacks.

Measuring the accuracy of delivered forecasts is a tough piece of work for us. Accuracy measurement accounts for roughly half of the complexity of our forecasting technology: the more advanced the forecasting technology, the greater the need for robust accuracy measurements.

In particular, Lokad returns the forecast accuracy associated with every single forecast that we deliver (for example, our Excel add-in reports forecast accuracy). The metric used for accuracy measurement is the MAPE (Mean Absolute Percentage Error).
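The MAPE mentioned above can be sketched in a few lines. This is a minimal illustration of the standard definition, not Lokad's actual implementation; the function name and the handling of zero actuals are our own choices.

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error: the average of |actual - forecast| / |actual|.

    Zero actuals are skipped here to avoid division by zero; other
    conventions exist for handling them.
    """
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    if not pairs:
        raise ValueError("MAPE is undefined when all actuals are zero")
    return sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

# Errors of 10%, 25% and 0% average out to about 11.7%.
print(mape([100, 80, 120], [90, 100, 120]))
```

A MAPE of 0.117 reads as "forecasts are off by 11.7% on average"; note that, unlike an accuracy percentage, a MAPE can exceed 100% when forecasts are wildly off.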

In order to compute an estimated accuracy, Lokad proceeds (roughly) through cross-validation tuned for time-series forecasts. Cross-validation is simpler than it sounds. If we consider a weekly forecast 10 weeks ahead with 3 years (roughly 150 weeks) of history, then the cross-validation looks like:

  1. Take the first week of history, forecast 10 weeks ahead, and compare the forecasts with the actual values.
  2. Take the first 2 weeks, forecast 10 weeks ahead, and compare.
  3. Take the first 3 weeks, forecast 10 weeks ahead, and compare.
  4. Continue, adding one week at a time, until the history is exhausted.
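The steps above can be sketched as a loop over a growing history. This is a simplified illustration, not Lokad's engine: `naive_forecast` is a deliberately trivial placeholder model, and the toy data is made up.

```python
def mean_abs_pct_error(actual, forecast):
    # MAPE over one evaluation window, skipping zero actuals.
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

def naive_forecast(history, horizon):
    # Placeholder model for illustration: repeat the last observed value.
    return [history[-1]] * horizon

def cross_validate(series, horizon, forecaster):
    """Sliding-origin cross-validation: grow the history one week at a
    time, forecast `horizon` weeks ahead, measure the error against the
    weeks that actually followed, and average the errors."""
    errors = []
    for k in range(1, len(series) - horizon + 1):
        forecast = forecaster(series[:k], horizon)
        actual = series[k:k + horizon]
        errors.append(mean_abs_pct_error(actual, forecast))
    return sum(errors) / len(errors)

# Toy weekly series: ~150 weeks, so the loop reruns the model ~140 times.
weekly_sales = [100 + (i % 7) for i in range(150)]
print(cross_validate(weekly_sales, 10, naive_forecast))
</antml>```

With a 10-week horizon and 150 weeks of history, the model is retrained and re-evaluated about 140 times, which is why the article notes that the process screams for automation.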

The process is rather tedious, as we end up recomputing forecasts about 150 times for only 3 years of history. Obviously, cross-validation screams for automation, and there is little hope of going through such a process without computer support. Yet computers typically cost less than business forecast errors, and Lokad relies on cloud computing to deliver such compute-intensive calculations.

Attempts to “simplify” the process outlined above are very likely to end up with overfitting problems. We suggest being very careful, as overfitting isn’t a problem to be taken lightly. When in doubt, stick to a complete cross-validation.


Reader Comments (1)

I am wanting to calculate forecast accuracy compared to sales where I have one column with actual sales figures and two other columns with forecasts. What I need to do is display the accuracy of each forecast against actual sales in terms of a percentage. Simply showing the percentage difference is not good enough (can be anywhere from -200% to +200% as our sales guys are rubbish at forecasting), I need to show the accuracy as a figure from 0% to 100%. 8 years ago | acekard 2i
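One possible answer to the commenter's question, building on the MAPE discussed in the article: clamp the absolute percentage error at 100% before subtracting it from 100%, so the result always lands between 0% and 100%. This is one common convention among several, not the only way to bound an accuracy figure.

```python
def bounded_accuracy(actual, forecast):
    """Accuracy as a percentage in [0, 100]: 100% minus the absolute
    percentage error, with the error clamped at 100%."""
    if actual == 0:
        # Convention choice: a zero actual scores 100% only if the
        # forecast was also zero.
        return 100.0 if forecast == 0 else 0.0
    error = abs(actual - forecast) / abs(actual)
    return 100.0 * (1.0 - min(error, 1.0))

print(bounded_accuracy(100, 90))   # forecast off by 10% -> accuracy near 90%
print(bounded_accuracy(100, 350))  # -> 0.0 (250% error, clamped at 100%)
```

Applied per row to each of the two forecast columns against the actuals column, this yields the 0%-to-100% figure the commenter asks for, even when forecasts overshoot by several hundred percent.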