
Promotion planning in general merchandise retail - Optimization challenges

Published on by Joannes Vermorel.

So far, we covered data challenges and process challenges in the context of promotional forecasts. In this post, the last of the series, we cover the very notion of quantitative optimization when considering promotions. Indeed, the choice of the methodological framework that is used to produce the promotion forecasts and measure their quantitative performance is critically important and yet usually (almost) completely dismissed.

As the old saying goes, there is no optimization without measurement. Yet, in the case of promotions, what are you actually measuring?

Quantifying the performance of promotions

Even the most advanced predictive statistics remain rather dumb in the sense that they do nothing but minimize some mathematical error function. As a consequence, if the error function is not deeply aligned with the business, no improvement is possible, because the measure of the improvement itself is off.

Being able to move faster doesn’t matter as long as you don’t even know whether you’re moving in the right direction.

When it comes to promotions, the usual inventory economic forces are not the only ones at play:

  • inventory costs money; however, compared to permanent inventory, promotional inventory can cost even more, because if the goods are not usually sold in the store, any left-over after the end of the promotion will clutter the shelves.
  • promotions are an opportunity to increase your market share, but typically at the expense of the retailer's margin; a key profitability driver is the stickiness of the impulse given to customers.
  • promotions are negotiated rather than merely planned; a better negotiation with the supplier can yield more profits than a better planning.

All those forces need to be accounted for quantitatively; and here lies the great difficulty: nobody wants to be quantitatively responsible for a process as erratic and uncertain as promotions. Yet, without quantitative accountability, it’s unclear whether a given promotion creates any value, and if it does, what can be improved for the next round.

A quantitative assessment requires a somewhat holistic measure, starting with the negotiation with the supplier, and ending with the far reaching consequences of imperfect inventory allocation at the store level.

Toward risk analysis with quantiles

Holistic measurements, while desirable, are typically out of reach for most retail organizations, which rely on median forecasts to produce the promotion planning. Indeed, median forecasts are implicitly equivalent to minimizing the Mean Absolute Error (MAE), which, without being wrong, remains the archetype of a metric strictly agnostic of all the economic forces in presence.

But how could improving the MAE be wrong? As usual, statistics are deceptive. Let’s consider a relatively erratic promoted item to be sold in 100 stores. The stores are assumed to be similar, and the item has a 1/3 chance of facing a demand of 6 units, and a 2/3 chance of facing a demand of zero units. The best median forecast here is zero units. Indeed, 2 units per store would not be the best median forecast, but the best mean forecast, that is, the forecast that minimizes the MSE (Mean Square Error). Obviously, forecasting a zero demand across all stores is buggy. This example illustrates how extensively the MAE can mismatch the business forces. The MSE shows similar dysfunctions in other situations. There is no free lunch: you can't get a metric that is both ignorant of the business and aligned with the business.
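To make the example concrete, here is a minimal sketch (illustrative only, not Lokad code) that brute-forces the per-store forecast minimizing each metric, using the demand distribution described above.

```python
# Illustrative sketch: which single per-store forecast minimizes MAE vs MSE
# for an item selling 6 units with probability 1/3, and 0 units otherwise?
outcomes = [(6, 1 / 3), (0, 2 / 3)]  # (demand, probability)

def mae(forecast):
    # Expected absolute error of the forecast against the demand.
    return sum(p * abs(d - forecast) for d, p in outcomes)

def mse(forecast):
    # Expected squared error of the forecast against the demand.
    return sum(p * (d - forecast) ** 2 for d, p in outcomes)

candidates = range(7)
best_mae = min(candidates, key=mae)  # the median: 0 units
best_mse = min(candidates, key=mse)  # the mean: 2 units
print(best_mae, best_mse)  # 0 2
```

The MAE-optimal forecast is zero in every store, despite a third of them selling 6 units: optimizing a metric blind to the economics produces a forecast that is statistically "best" yet useless for planning.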

Quantile forecasts represent a first step in producing more reasonable results for promotion forecasts because it becomes possible to perform risk analysis, addressing questions such as:

  • In the upper 90% best case, how many stores will face a stock-out before the end of the promotion?
  • In the lower 10% worst case, how many stores will be left with more than 2 months of inventory?

The design of the promotion can be decomposed as a risk analysis, integrating the economic forces, sitting on top of quantile forecasts. From a practical viewpoint, this method has the considerable advantage of keeping the forecast strictly decoupled from the risk analysis, which is an immense simplification as far as the statistical analysis is concerned.
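As an illustration of such a risk question, the sketch below reuses the erratic item from the previous section (an assumption made here purely to keep the example self-contained): if each store is stocked with fewer than 6 units, a store stocks out exactly when the high demand occurs, so the number of stocked-out stores follows a Binomial(100, 1/3) distribution, and its quantiles answer the "upper 90% best case" question directly.

```python
# Risk-analysis sketch on top of a demand distribution: quantiles of the
# number of stores facing a stock-out before the end of the promotion.
from math import comb

stores, p_stockout = 100, 1 / 3  # each store stocks out with probability 1/3

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def quantile(tau, n, p):
    # Smallest k such that P(X <= k) >= tau
    k = 0
    while binom_cdf(k, n, p) < tau:
        k += 1
    return k

# In the upper 90% of cases, at most this many stores stock out:
stockouts_90 = quantile(0.90, stores, p_stockout)
print(stockouts_90)
```

The same machinery, pointed at the inventory-left distribution instead, answers the "lower 10% worst case" question on left-over stock.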

Coupling pricing and demand analysis

While a quantitative risk analysis already outperforms a plain median forecast, it remains relatively limited by design in its capacity to reflect the supplier negotiation forces.

Indeed, a retailer could be tempted to regenerate the promotion forecasts many times, varying the promotional conditions to reflect the scenarios negotiated with the supplier; however, such a usage of the forecasting system would lead to significant overfitting.

Simply put, if a forecasting system is repeatedly used to seek the maximum of a function built on top of the forecasts, i.e. finding the best promotional plan given the forecasted demand, then the most extreme value produced by the system is very likely to be a statistical fluke.

Thus, the optimization process instead needs to be integrated into the system, analyzing at once both the demand elasticity and the varying supplier conditions, i.e. the bigger the deal, the more favorable the supplier conditions.

Obviously, designing such a system is vastly more complicated than a plain median promotion forecasting system. However, for a large retail network, not striving to implement such a system can be seen as an instance of the streetlight effect.

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is".

The packaged technology of Lokad offers limited support to handle promotions, but this is an area that we address extensively with several large retailers, albeit in a more ad hoc fashion. Don’t hesitate to contact us, we can help.

Categories: insights, forecasting Tags: promotion forecasting optimization insights No Comments

Optimizing inventory with kits or bundles

Published on by Joannes Vermorel.

Merchants frequently sell kits (or bundles), where several items are sold together, while the possibility remains to buy the items separately. The existence of kits further complicates inventory optimization because it introduces dependencies between items as far as availability is concerned. In this post, we try to shed some light on optimizing inventory in the presence of kits.

There are two opposed approaches to deal with kits:

  • Do not store any kits, only separate items. Assemble the kits at the last moment assuming that all items are available.
  • Store all kits pre-assembled as separate SKUs. Kits are assembled in advance. If no kit is readily available, the kit is considered out-of-stock.

In practice, most inventory policies toward kits tend to be a mix of those two approaches.

Let’s start the review with the first approach. The primary benefit of keeping everything disassembled is that it maximizes the availability of the separate items; however, this comes at the expense of the kit availability.

Indeed, assuming the availability levels of the items are independent and referred to as L1, L2, …, Lk (for a kit with k items), the availability of the kit is LK = L1 x L2 x … x Lk

Let’s assume that we have a kit of 5 items, all items having the same service level. The graph above illustrates the correspondence between the service level of the kit and the service levels of the separate items.

For example, with 5 items at a 90% service level, the kit ends up with a service level slightly below 60%. This illustrates the weakest-link behavior of kits: it only takes one item being out-of-stock to put the whole kit out-of-stock. Even if all items have fairly high availability, the kit availability can be much lower; and the bigger the kit, the worse it gets. If instead of 5 items we consider a kit of 10 items at a 90% service level, then the kit availability is reduced to 35%, which is typically unacceptable for most businesses.
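The numbers above follow directly from the product formula; a quick sketch to verify them:

```python
# Weakest-link effect: the kit availability is the product of the
# (assumed independent) availabilities of its items.
def kit_service_level(item_levels):
    availability = 1.0
    for level in item_levels:
        availability *= level
    return availability

print(kit_service_level([0.90] * 5))   # ~0.59: just below 60%
print(kit_service_level([0.90] * 10))  # ~0.35: roughly 35%
```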

The second approach consists of storing pre-assembled kits. This approach maximizes the availability of the kits. In this case, kits are treated like any other item: the demand for kits is forecast, with quantile forecasts, and a reorder point is computed for the SKU representing the kit. This inventory policy preserves a strict decoupling of the kit and its items.

With this approach, the service level of the kit is driven by the quantile calculation. As such, the kit is not negatively impacted by the separate availability of the items. Each item also gets its separate reorder point.

The primary drawback of this second approach is that, in the worst case, the amount of inventory can be doubled for limited or no extra availability. In practice however, assuming that about half of the item consumption comes from kit sales, the stock is typically increased by roughly 50% when applying this second approach instead of the first one; the extra inventory is used to ensure the high availability of the kit itself.

The optimal inventory strategy, the one that maximizes the ROI (Return On Inventory), is usually a mix of those two approaches.

The exact inventory optimization of kits is a relatively intricate problem; however, it can be rephrased as: at which point should the merchant start refusing to sell one of the kit’s items separately, because she would risk losing more advantageous orders on kits instead?

Indeed, as long as kits are available, there is typically no incentive for the merchant to refuse selling a kit in order to preserve the availability of the separate items. (There might be an incentive if items have a much higher gross margin than the kit, but for the sake of simplicity, this case is beyond the scope of the present discussion.)

In order to determine how many items should be preserved for kits (assembled or not), one can use an alternative quantile forecast, where the service level is not set as a desired availability target, but at a much lower probability that reflects a probable sales volume that should be preserved.

For example, let's assume that a 30% service level on a kit gives a quantile forecast of 5. This value can be interpreted as “there is a 70% chance that 5 or more units of the kit will be sold over the duration of the lead time”. If a 70% confidence in selling 5 kits outweighs the benefit of selling the next item now (assuming only 5 items remain), then the item should be considered reserved for kitting purposes.
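As a toy illustration of this trade-off (the function and its parameters are hypothetical, introduced here purely for illustration), the decision can be sketched as comparing the margin of an immediate separate sale against the expected margin of the probable kit sales:

```python
# Hypothetical sketch: should the last remaining items be reserved for kits?
def reserve_for_kits(items_left, kit_quantile, p_kit_sales,
                     item_margin, kit_margin_per_item):
    # Reserve when the probable kit sales (as given by the low service
    # level quantile forecast) would consume all remaining items, and
    # the expected kit margin beats an immediate separate sale.
    if items_left > kit_quantile:
        return False  # enough items for both kits and separate sales
    return p_kit_sales * kit_margin_per_item > item_margin

# 5 items left, 70% chance of selling 5 kits, kit margin higher per item:
print(reserve_for_kits(5, 5, 0.70, 2.0, 3.5))  # True: 0.7 * 3.5 > 2.0
```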

We are still only scratching the surface as far as kits are concerned. Don’t hesitate to post your questions in the comments.

Categories: insights, safety stock Tags: inventory optimization 2 Comments

Phantomscan, get rid of phantom inventory

Published on by Joannes Vermorel.

Delivering more accurate demand forecasts has been the goal of Lokad since its creation. However, no matter how good the forecasts are, if the underlying inventory records are inaccurate, the whole inventory optimization, as delivered by Salescast, is off.

A bit more than one year ago, we launched Shelfcheck, a tool to detect out-of-shelf (OOS) problems based on the analysis of improbable sales drops. However, Shelfcheck is only palliative care, as OOS alerts are produced after the stockout problem has begun.


Today, we announce the launch of Phantomscan, a new webapp that helps retailers get rid of their phantom inventory. In short, by analyzing patterns observed in past inventory corrections, Phantomscan predicts which SKUs are most likely to have inaccurate records. Instead of having employees perform a classic cycle count, they focus directly where counting is needed the most.

Not that many empirical studies have been conducted about inventory accuracy in retail; however, the few that exist are stunning: in all of them, inventory records have been found to be wildly inaccurate at the store level.

Although computerized tracking of inventory at the stock keeping unit (SKU) level is commonly assumed to be accurate, we found discrepancies in 65% of the nearly 370,000 inventory records we gathered from multiple stores of a leading supply chain. DeHoratius and Raman (2004)

The problem is serious because phantom inventory acts as an invisible hand lowering the service levels of all the SKUs impacted. Yet, it seems that the only option remains cycle counting, which happens to be a very expensive way to improve inventory accuracy.

Phantomscan, in contrast, is designed to make the most of each minute spent counting inventory. It is curative care against OOS, helping the retailer remove inaccuracies before OOS problems emerge. With an aggressive per-store pricing, we believe that Phantomscan will be suitable for retail companies of any size.

At this point, we are looking for volunteers to take part in Phantomscan beta. Early adopters will benefit from Phantomscan free of charge for the duration of the beta, plus 6 months after the end of the beta. Furthermore, beta users will also get the chance to influence the development of the webapp toward the features that serve them the most. 

To take part in the beta, just drop a line to

Categories: accuracy, on shelf availability, supply chain Tags: inventory optimization phantom No Comments

Refreshing Min/Max inventory planning

Published on by Joannes Vermorel.

Modeling inventory replenishments

Min/Max inventory planning has been available for decades. Yet, some people argue that Min/Max drives higher costs and that it should be replaced with other methods.

Before jumping to conclusions, let’s try to clarify the situation a bit. For a given SKU (Stock Keeping Unit), the inventory manager only needs two values to specify his inventory management policy:

  • A threshold, named reorder point, which defines if any reorder should be made (Point 3 in the schema).
  • A quantity, named reorder quantity, to be reordered, if any (Point 1 in the schema).

The Min/Max system simply states that:

MIN = ReorderPoint
MAX = ReorderQuantity + InventoryOnHand  + InventoryOnOrder

Thus, as long as you’re not carving your Min & Max values in stone, the Min/Max system is perfectly generic: it can express any reorder policy. As far as inventory optimization is concerned, adopting the Min/Max convention is neutral, as it’s just a way to express your replenishment policy. Contrary to what people seem to believe, Min/Max neither defines nor prevents any inventory optimization strategy.
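In code, the Min/Max convention boils down to a few lines (a generic sketch, not tied to any particular system):

```python
# Min/Max as a generic reorder policy: order when the inventory position
# (on hand + on order) falls to MIN or below, and order back up to MAX.
def reorder_quantity(min_level, max_level, on_hand, on_order):
    position = on_hand + on_order
    if position <= min_level:
        return max_level - position
    return 0  # reorder point not reached; nothing to order

print(reorder_quantity(10, 50, 5, 2))   # position 7 <= 10, order 43
print(reorder_quantity(10, 50, 20, 5))  # position 25 > 10, order 0
```

Any replenishment policy then reduces to the question of how MIN and MAX get recomputed over time.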

What about LSSC and Min/Max?

Let’s see how our Safety Stock Calculator can be plugged into a Min/Max framework. The goal is to update the Min & Max values to optimize the inventory based on the forecasts delivered by Lokad.

The calculator reports reorder points. Thus, handling MIN values is rather straightforward, since MIN = ReorderPoint. The calculator even lets you export reorder points directly into any 3rd party database. Yet, MAX values are slightly more complicated. The MAX definition states that:

MAX = ReorderQuantity + InventoryOnHand  + InventoryOnOrder

Let’s start with the ReorderQuantity. The safety stock analysis gives us:

ReorderQuantity = LeadDemand + SafetyStock
                             - InventoryOnHand - InventoryOnOrder

Which could be rewritten as:

ReorderQuantity = ReorderPoint - InventoryOnHand - InventoryOnOrder

where ReorderPoint = LeadDemand + SafetyStock. Thus,

MAX = ReorderQuantity + InventoryOnHand  + InventoryOnOrder


MAX = (ReorderPoint - InventoryOnHand - InventoryOnOrder)
    + InventoryOnHand  + InventoryOnOrder

Which simplifies into MAX = ReorderPoint, that is to say, MAX = MIN.
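The simplification can be checked numerically with arbitrary figures:

```python
# Numeric check: without any EOQ, the MAX definition collapses to MIN.
lead_demand, safety_stock = 80, 20
on_hand, on_order = 30, 10

reorder_point = lead_demand + safety_stock        # MIN = 100
reorder_qty = reorder_point - on_hand - on_order  # 100 - 30 - 10 = 60
max_level = reorder_qty + on_hand + on_order      # 60 + 30 + 10 = 100

print(max_level == reorder_point)  # True: MAX = MIN
```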

Obviously there is something fishy going on here. Did you spot what’s wrong in our reasoning?

Well, we haven’t defined any cost associated with order operations. Consequently, the maths end up telling us something rather obvious: without an extra cost for a new order (beyond the cost of buying the product from the supplier), the optimal planning involves an infinite number of replenishments, where the size of each replenishment tends to zero (or rather to 1, if we assume that no fractional product can be ordered).

Getting back to a more reasonable situation, we need to introduce the EOQ (Economic Order Quantity): the minimal amount of inventory that maintains the expected profit margin on the product. Note that our definition differs a bit from the historical EOQ, which is a tradeoff between the fixed cost per order and the holding cost.

In our experience, the EOQ is a complex product-specific mix:

  • It depends on volume discounts.
  • It depends on product lifetime, and potentially expiration dates.
  • It depends (potentially) on other orders being placed at the same time.
  • ...

Thus, we are not going to define the EOQ here, as it would go beyond the scope of this post. Instead, we are just going to assume that this value is known to the retailer (somehow). Introducing the EOQ leads to:

MAX = ReorderPoint + EOQ
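The exact formula depends on how the EOQ is defined for each product; as a sketch, assuming the EOQ simply acts as a floor on the reorder quantity (an assumption of this illustration, not the only possible design):

```python
# Sketch: EOQ as a floor on the reorder quantity. With EOQ = 0, the
# policy collapses back to MAX = MIN = ReorderPoint, as derived above.
def reorder_quantity(reorder_point, eoq, on_hand, on_order):
    position = on_hand + on_order
    if position > reorder_point:
        return 0  # reorder point not reached
    return max(eoq, reorder_point - position)

print(reorder_quantity(100, 20, 50, 10))  # max(20, 40) = 40
print(reorder_quantity(100, 20, 95, 0))   # max(20, 5) = 20
```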


What’s the impact of EOQ on service level?

Let’s have another look at the schema. Point 2 illustrates what happens when the reorder quantity is made larger: the replenishment cycle gets longer too (see Point 4), as it takes more time to reach the reorder point.

Other things being equal, increasing the EOQ also increases the service level, yet in a rather inefficient way, as it leads to a very uniform increase of your inventory levels that is not going to accurately match the demand.

Thus, we suggest taking the smallest EOQ that maintains the desired margin on the products being ordered.

Categories: insights, safety stock, supply chain Tags: inventory optimization safety stock supply chain 4 Comments