Whitepaper on Out-of-Shelf Monitoring Technology released

Published by Joannes Vermorel.

Few aspects of retailing are as fundamental as fully stocked shelves, and few concerns rank higher with ever more demanding customers than product availability. Nevertheless, on-shelf availability remains a huge challenge for the industry. Growing, more dynamic product portfolios, staff and cost cuts, and an increasingly complex supply chain have all made the challenge harder.

Cover of the OOS whitepaper by Lokad

While shelf availability is a top concern for customers and retailers alike, even Tier 1 retailers today mostly rely on manual checks by store staff. This puts a large burden on employees, and response times to out-of-shelf situations are slow.

However, grocery retailers across Europe and the US have started to explore technology that can help address the problem. With this in mind, we have published a whitepaper on out-of-shelf monitoring technology which gives an overview of:

  • Objectives of out-of-shelf monitoring systems
  • Introduction to the technology
  • Definition of performance characteristics
  • Quantification of system capabilities and limitations

Are we missing some interesting aspects of this type of technology, or do you have experience with OOS monitoring systems you want to share? Please let us know in the comments.

Categories: business, insights, retail Tags: documentation oos shelfcheck whitepaper

ROI = Return On Inventory?

Published by Joannes Vermorel.

ROI stands, of course, for return on investment. However, the idea that every euro or dollar ‘invested’ in inventory brings a certain return in terms of profit is a powerful one. When looking at inventory from this angle, two questions arise:

  • Is the profit earned by a product relative to the capital invested in stocking it (i.e. its ROI) similar across the product portfolio?
  • If the ROIs of the various products are heterogeneous, how can the overall ROI delivered by the inventory be maximized?

As you will have guessed, the answer to the first question is a clear ‘NO’. The return generated by a product for the euros invested in stocking it differs vastly from product to product. Two aspects have a major impact.

First, the gross margin of a product directly affects the ROI. This is obvious, and product managers naturally try to build a portfolio with high margins in mind. However, many other considerations come into play, such as the coverage of the product portfolio. Adding more products to the portfolio is often also a growth strategy.

The second aspect is more subtle, but equally important: the amount of stock required to provide a desired availability (i.e. service level) varies significantly among products. The main driver here is the volatility of demand: the higher the uncertainty of the future demand, the higher the inventory level required to ensure a given service level.

As an example, let's imagine two products with the same gross margin that sell the same number of units during the year, thus generating the same annual gross profit. However, product A sells the same amount each week with high certainty, while product B is much more erratic, with no sales in some weeks and large orders in others. To achieve the same availability (service level) for both products, the safety stock for product B must be a lot higher than for product A, which carries very little uncertainty and therefore requires little safety stock. As a result, product B needs many more units in stock than product A to generate the same annual profit, so the ROI for product A is significantly higher.
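The effect can be sketched numerically. The snippet below is a minimal illustration, assuming normally distributed weekly demand, a weekly replenishment cycle and the classic safety stock formula; all product names and figures are hypothetical.

    # Minimal sketch (hypothetical numbers): two products with identical gross
    # profit but different demand volatility, using the classic
    # normal-approximation safety stock formula.
    from scipy.stats import norm

    service_level = 0.95
    z = norm.ppf(service_level)          # safety factor for the target service level
    unit_cost = 10.0                     # euros tied up per unit in stock
    annual_gross_profit = 5000.0         # same for both products by construction

    products = {
        # name: (mean weekly demand, std deviation of weekly demand)
        "A (steady seller)":  (20.0, 2.0),
        "B (erratic seller)": (20.0, 15.0),
    }

    for name, (mu, sigma) in products.items():
        safety_stock = z * sigma              # extra units held against uncertainty
        cycle_stock = mu / 2.0                # average stock from weekly replenishment
        avg_inventory_value = (cycle_stock + safety_stock) * unit_cost
        roi = annual_gross_profit / avg_inventory_value
        print(f"{name}: avg inventory = {avg_inventory_value:6.0f} EUR, ROI = {roi:4.1f}x")

With these illustrative numbers, the erratic product B ties up roughly 2.5 times more capital than product A for the same annual gross profit, hence a much lower ROI.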

Inventory is money

Businesses continually work on maximizing their return on capital employed (ROCE). Inventory is a significant part of the capital invested by most retailers, and therefore an important opportunity for optimization. The good news is that the potential for optimizing the ROI of your inventory by taking advantage of these heterogeneous ROIs across your catalog is large.

The latter is done by finding, for each SKU, the service level that produces the best ROI within the constraint of a set inventory budget. As a result, availability and revenue can be increased for a given inventory budget, or cash can be released from inventory while maintaining the overall availability.

The analysis behind this optimization, however, is not trivial given the non-linear relationship between service level and inventory level. Additionally, a set of ‘strategic’ constraints, such as availability goals for certain products, needs to be taken into account.

This is a challenge we plan to take on in the near future, the goal being a fully automated ROI optimization over the product portfolio for a given working capital sum. As an output, the system will determine service levels per SKU that give the highest availability (and therefore revenue) for a chosen inventory budget.
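To make the idea more concrete, here is a minimal sketch of a budget-constrained allocation. It assumes a simple greedy heuristic, normally distributed lead-time demand and a crude fill-rate proxy; the SKUs, the figures and the heuristic itself are hypothetical illustrations, not the algorithm we will ship.

    # Greedy service-level allocation under an inventory budget (hypothetical sketch).
    from scipy.stats import norm

    BUDGET = 2000.0                       # total working capital allowed in stock (EUR)
    LEVELS = [0.80, 0.90, 0.95, 0.98]     # candidate service levels per SKU

    skus = {
        # name: (mean lead-time demand, std dev, unit cost EUR, margin EUR/unit, annual units)
        "sku_1": (40.0,  5.0,  8.0, 3.0, 2000),
        "sku_2": (10.0,  8.0, 15.0, 6.0,  500),
        "sku_3": (25.0, 12.0,  5.0, 2.0, 1200),
    }

    def stock_cost(mu, sigma, cost, level):
        """Inventory value needed to reach a service level (reorder point heuristic)."""
        return (mu + norm.ppf(level) * sigma) * cost

    def expected_profit(margin, annual_units, level):
        """Crude proxy: lost sales are proportional to 1 - service level."""
        return margin * annual_units * level

    # Start every SKU at the lowest level, then repeatedly upgrade the SKU
    # offering the best extra profit per extra euro of stock, until the budget is hit.
    chosen = {name: 0 for name in skus}
    spent = sum(stock_cost(*skus[n][:3], LEVELS[0]) for n in skus)

    while True:
        best, best_ratio, best_cost = None, 0.0, 0.0
        for name, (mu, sigma, cost, margin, units) in skus.items():
            i = chosen[name]
            if i + 1 >= len(LEVELS):
                continue
            extra_cost = stock_cost(mu, sigma, cost, LEVELS[i + 1]) - stock_cost(mu, sigma, cost, LEVELS[i])
            extra_profit = expected_profit(margin, units, LEVELS[i + 1]) - expected_profit(margin, units, LEVELS[i])
            if spent + extra_cost <= BUDGET and extra_profit / extra_cost > best_ratio:
                best, best_ratio, best_cost = name, extra_profit / extra_cost, extra_cost
        if best is None:
            break
        chosen[best] += 1
        spent += best_cost

    for name, i in chosen.items():
        print(f"{name}: service level {LEVELS[i]:.0%}")
    print(f"Inventory budget used: {spent:.0f} EUR of {BUDGET:.0f}")

The real optimization has to cope with non-normal demand and strategic constraints, but the sketch conveys the core trade-off: each extra euro of stock should go to the SKU where it buys the most availability and profit.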

Excessive inventory reduces the return on invested capital, but too little inventory diminishes profits as well. The optimal inventory level is therefore a trade-off between stock-out cost and inventory cost, and falling too far on either side of the optimum will negatively impact your business. Our plans for service level optimization will leverage our forecasting technology. Stay tuned.

Categories: business, insights, roadmap Tags: insights inventory optimization roadmap roi working capital

Sparsity: when accuracy measure goes wrong

Published by Joannes Vermorel.

Three years ago, we published Overfitting: when accuracy measure goes wrong. However, overfitting is far from being the only situation where simple accuracy measurements can be very misleading. Today, we focus on a very error-prone situation: intermittent demand, which is typically encountered when looking at sales at the store level (or in e-commerce).

We believe that this single problem alone has prevented most retailers from moving toward advanced forecasting systems at the store level. As with most forecasting problems, it's subtle, it's counterintuitive, and some companies charge a lot to bring poor answers to the question.

Illustration of intermittent sales

The most popular error metrics in sales forecasting are the Mean Absolute Error (MAE) and the Mean Absolute Percentage Error (MAPE). As a general guideline, we suggest sticking with the MAE, as the MAPE behaves very poorly whenever time-series are not smooth, that is, all the time as far as retailers are concerned. However, there are situations where the MAE behaves poorly too. Low sales volumes are one of them.

Let's review the illustration above. We have an item sold over 3 days. The number of units sold over the first two days is zero. On the third day, one unit gets sold. Let's assume that the demand is, in fact, exactly 1 unit every 3 days. Technically speaking, it's a Poisson distribution with λ=1/3.

In the following, we compare two forecasting models:

  • a flat model M at 1/3 every day (the mean).
  • a flat model Z at zero every day.

As far as inventory optimization is concerned, the model zero (Z) is downright harmful. Assuming that a safety stock analysis will be used to compute a reorder point, a zero forecast is very likely to produce a reorder point at zero too, causing frequent stockouts. An accuracy metric that favors the model zero over more reasonable forecasts is behaving rather poorly.

Let's review our two models against the MAPE (*) and the MAE.

  • M has a MAPE of 44%.
  • Z has a MAPE of 33%.
  • M has a MAE of 0.44.
  • Z has a MAE of 0.33.

(*) The classic definition of the MAPE involves a division by zero when the actual value is zero. We assume here that the actual value is replaced by 1 when it is zero. Alternatively, we could also have divided by the forecast (instead of the actual value), or used the sMAPE. These changes make no difference: the conclusion of the discussion remains the same.
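These figures can be reproduced with a few lines of code; here is a minimal sketch, using the zero-replacement convention from the footnote above:

    # Actual sales of (0, 0, 1) over three days; model M forecasts 1/3 every day,
    # model Z forecasts 0 every day. Zero actuals are replaced by 1 in the MAPE.
    actual = [0, 0, 1]
    models = {"M (mean)": [1/3, 1/3, 1/3], "Z (zero)": [0, 0, 0]}

    for name, forecast in models.items():
        errors = [abs(f - a) for f, a in zip(forecast, actual)]
        mae = sum(errors) / len(errors)
        mape = sum(e / max(a, 1) for e, a in zip(errors, actual)) / len(errors)
        print(f"{name}: MAE = {mae:.2f}, MAPE = {mape:.0%}")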

In conclusion, according to both the MAPE and the MAE, model zero prevails here.

However, one might argue that this is a simplistic situation which does not reflect the complexity of a real store. This is not entirely true. We have performed benchmarks over dozens of retail stores, and the winning model (according to MAE or MAPE) is usually the model zero - the model that always returns zero. Furthermore, this model typically wins by a comfortable margin over all the other models.

In practice, at store level, relying either on the MAE or the MAPE to evaluate the quality of forecasting models is asking for trouble: the metric favors models that return zeroes; the more zeroes, the better. This conclusion holds for virtually every single store we have analyzed so far (except for the few high-volume items that do not suffer from this problem).

Readers who are familiar with accuracy metrics might propose to go instead for the Mean Square Error (MSE), which does not favor the model zero. This is true; however, the MSE, when applied to erratic data - and sales at store level are erratic - is not numerically stable. In practice, any outlier in the sales history will vastly skew the final results. This sort of problem is THE reason why statisticians have been working so hard on robust statistics in the first place. No free lunch here.
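To illustrate the stability issue with hypothetical numbers, the sketch below shows the MSE indeed ranking model M above model Z on the clean history, and then shows a single bulk purchase dominating the metric for both models:

    # The MSE prefers model M on the clean history, but one outlier sale
    # (an illustrative bulk purchase of 40 units) swamps the metric entirely.
    def mse(forecast, actual):
        return sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual)

    clean   = [0, 0, 1]                 # the intermittent history from above
    outlier = [0, 0, 1, 40]             # same history plus one bulk purchase

    for label, actual in (("clean", clean), ("with outlier", outlier)):
        m = [1/3] * len(actual)
        z = [0] * len(actual)
        print(f"{label}: MSE(M) = {mse(m, actual):.2f}, MSE(Z) = {mse(z, actual):.2f}")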

How to assess store level forecasts then?

It took us a long, long time to figure out a satisfying solution to the problem of quantifying the accuracy of forecasts at the store level. Back in 2011 and before, we were essentially cheating. Instead of looking at daily data points, when the sales data was too sparse, we were typically switching to weekly aggregates (or even to monthly aggregates for extremely sparse data). By switching to longer aggregation periods, we were artificially increasing sales volumes per period, hence making the MAE usable again.

The breakthrough came only a few months ago through quantiles. In essence, the insight was: forget the forecasts, only reorder points matter. By trying to optimize our classic forecasts against metrics X, Y or Z, we were trying to solve the wrong problem.

Wait! Since reorder points are computed based on the forecasts, how could you say forecasts are irrelevant?

We are not saying that forecasts and forecast accuracy are irrelevant. However, we are stating that only the accuracy of the reorder points themselves matters. The forecast, or whatever other variable is used to compute reorder points, cannot be evaluated on its own. Only the accuracy of the reorder points need be, and should be, evaluated.

It turns out that a metric to assess reorder points exists: it's the pinball loss function, a function that has been known to statisticians for decades. The pinball loss is vastly superior not because of its mathematical properties, but simply because it fits the inventory tradeoff: too much stock vs. too many stockouts.
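For reference, the pinball loss takes only a few lines to implement. The sketch below uses the standard definition, with tau being the target service level; the numeric example is purely illustrative:

    # Pinball loss: for a target service level tau, each unit of unmet demand
    # costs tau, while each unit of excess stock costs 1 - tau. At high tau,
    # stockouts are penalized far more heavily than overstocks.
    def pinball_loss(actual_demand, reorder_point, tau):
        """Asymmetric loss: under-stocking costs tau, over-stocking costs 1 - tau."""
        if actual_demand >= reorder_point:
            return tau * (actual_demand - reorder_point)
        return (1 - tau) * (reorder_point - actual_demand)

    # Hypothetical check at a 90% service level: missing 5 units of demand hurts
    # 9 times more than carrying 5 units too many.
    print(pinball_loss(actual_demand=20, reorder_point=15, tau=0.90))  # 4.5
    print(pinball_loss(actual_demand=10, reorder_point=15, tau=0.90))  # 0.5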

Categories: accuracy, retail, time series Tags: accuracy pinball sparse store

Video: Quantile Forecasts - Part 2

Published by Joannes Vermorel.

Last week, we published the Quantile Forecasts - Part 1 video; here comes Part 2. In the previous video, we discussed what quantiles are about. In short, they are a new way to look at the inventory optimization mechanism itself.

In Part 2, we provide some non-technical insights into why quantile forecasts outperform classic ones in three common situations.

Video summary (7min46):

  • High service levels
  • Intermittent demand
  • Spiky demand

Don't hesitate to post questions in the comments.

Categories: docs, insights, video Tags: insights quantiles video