Promotion planning in general merchandise retail – Process challenges

Published on by Joannes Vermorel.

In our previous post, we covered the data challenges in promotion forecasting. In this post, we cover the process challenges: When are forecasts produced? How are they used? Etc. Indeed, while getting accurate forecasts is already tough, retailers frequently do not leverage forecasts the way they should, leading to sub-optimal uses of the numerical results available. As usual, statistical forecasting turns out to be a counter-intuitive science, and it's all too easy to take the wrong turns.

Do not negotiate the forecast results

The purchasing department usually supervises the promotion planning process. Yet, while haggling can be tremendously effective for obtaining good prices from suppliers, haggling over forecasts does not work. Period. Still, we routinely observe that promotion forecasts end up being some kind of tradeoff negotiated between Purchasing and Supply Chain, or between Purchasing and IT, or between Purchasing and Planning, etc.

Assuming a forecasting process exists - which may or may not be accurate (that aspect is a separate concern) - then the forecasts are not up for negotiation. The forecasts are simply the best statistical estimate that can be produced for the company to anticipate the demand for the promoted items. If one of the negotiating parties has a provably better forecasting method available, then this method should become the reference; but again, no negotiation is involved.

The rampant misconception here is the lack of separation of concerns between forecasting and risk analysis. From a risk analysis perspective, it's probably fine to order a volume 5x larger than the forecast if the supplier is providing an exceptional deal for a long-lived product that is already sold in the network outside the promotional event. When people "negotiate" over a forecast, an implicit risk analysis is actually taking place. However, better results are obtained if forecasting and risk analysis are kept separate, at least from a methodological viewpoint.
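As a rough illustration - not Lokad's method, and with purely illustrative multiplier values - here is a minimal sketch of what keeping the two concerns separate can look like: the forecast is never edited, and the risk appetite enters only when the order is sized.

```python
# Minimal sketch: the forecast stays untouched; the risk analysis is expressed
# as a separate multiplier applied when sizing the order. Values are illustrative.

def order_quantity(forecast_units, exceptional_deal=False, long_lived_product=False):
    """Return an order size derived from, but distinct from, the demand forecast."""
    risk_multiplier = 1.0
    if exceptional_deal and long_lived_product:
        # Leftovers can be sold outside the promotion, so a much larger
        # buy (e.g. 5x the forecast) may be acceptable.
        risk_multiplier = 5.0
    elif exceptional_deal:
        risk_multiplier = 2.0
    return round(forecast_units * risk_multiplier)

# The forecast remains 1,000 units; only the ordering decision changes.
print(order_quantity(1000, exceptional_deal=True, long_lived_product=True))  # 5000
```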

Remove manual interventions from the forecasts

In general merchandise retail, any data process involving manual operations is costly to scale to the size of the network: too many items, too many stores, too frequent promotions. Thus, from the start, the goal should be an end-to-end automated forecasting process.

Yet, while (nearly) all software vendors promise fully automated solutions, manpower requirements creep in all over the place. For example, special hierarchies between items may have to be maintained just for the sake of the forecasting system. This could involve special item groups dedicated to seasonality analysis, or lists of "paired" products where the sales history of an old product is used as a substitute when a new product has no sales history in the store.

Also, the fine-tuning of the forecasting models themselves might be very demanding, and while supposedly a one-off operation, it should be accounted for as an ongoing operational cost.

As a small tip, for store networks, beware of any vendor that promises to let you visualize the forecasts: spending even 10 seconds per data point to review them is hideously expensive for any fairly sized retail network.

The time spent by employees should be directed to areas where the investment is capitalized over time - continuously improving the promotional planning - rather than consumed merely to sustain the planning activity itself.

Don’t omit whole levels from the initiative

The most inaccurate forecasts that retailers produce are the implicit ones: decisions that reflect some kind of underlying forecast but that nobody has identified as such. For promotion forecasts, there are typically three distinct levels of forecasts:

  • national forecasts used to size the overall order placed with the supplier for the whole retail network.
  • regional forecasts used to distribute the national quantities between the warehouses.
  • local forecasts used to distribute the regional quantities between the stores.

We frequently observe that distinct entities within the retailer's organization end up being separately responsible for parts of the overall planning initiative: Purchasing handles the national forecasts, Supply Chain handles the regional forecasts, and store managers handle the local forecasts. The situation is then made worse when parties start to haggle over the numbers.

When splitting the forecasting process over multiple entities, nobody is clearly accountable for the (in)effectiveness of the promotional planning. It's hard to quantify the improvement brought by any specific initiative because results are mitigated or amplified by interfering initiatives carried out by other parties. In practice, this complicates attempts at continuously improving the process.

Forecast as late as you can

A common delusion about statistical forecasting is the hope that, somehow, the forecasts will become perfectly accurate at some point. However, promotion forecasts won't ever be even close to what people would commonly perceive as very accurate.

For example, across Western markets, we observe that for the majority of promoted items at the supermarket level, fewer than 10 units are sold per week for the duration of the promotion. At such volumes, forecasting 6 units and selling 9 units already yields a forecast error of 50% (3 units of error relative to a 6-unit forecast). There is no hope of achieving less than 30% error at the supermarket level in practice.
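To illustrate the point, here is a minimal simulation sketch. It assumes Poisson-distributed weekly demand - an assumption of ours, not a property established above - and shows that even a forecaster that knows the true mean demand faces a large percentage error at supermarket volumes:

```python
import numpy as np

# Even the best possible point forecast (the true mean) suffers a large
# percentage error when weekly volumes are below 10 units, assuming
# Poisson-distributed demand. Error is measured relative to the forecast,
# as in the 6-forecast / 9-sold example above.
rng = np.random.default_rng(42)

for mean_demand in [3, 6, 10]:
    sales = rng.poisson(mean_demand, size=100_000)  # simulated promotion weeks
    forecast = mean_demand                          # "perfect" point forecast
    error = np.mean(np.abs(sales - forecast)) / forecast
    print(f"mean demand {mean_demand:>2} units/week -> error ~{error:.0%}")
```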

Yet, while the forecasts are bound to an irreducible level of inaccuracy, some retailers (and not just retailers) exacerbate the problem by forecasting further into the future than is required.

For example, national forecasts are typically needed up to 20 weeks in advance, especially when importing goods from Asia. However, neither regional nor local forecasts need to be established so long in advance. At the warehouse level, planning can typically be carried out as late as 4 to 6 weeks in advance; and as far as stores are concerned, the quantitative details of the planning can be finalized only 1 week before the start of the promotion.

However, as the forecasting process is typically co-handled by various parties, a consensus emerges for a date that fits the constraints of all parties, that is, the earliest date proposed by any of them. This frequently results in forecasting demand at the store level up to 20 weeks in advance, generating wildly inaccurate forecasts that could have been avoided altogether by postponing the forecasts.

Thus, we recommend tailoring the planning of the promotions so that quantitative decisions are left pending until the last moment, when the final forecasts are produced, benefiting from the latest data.

Leverage the first day(s) of promotional sales at the store level

Forecasting promotional demand at the store level is hard. However, once the first day of sales has been observed, forecasting the demand for the rest of the promotion can be performed with much higher accuracy than any forecast produced before the start of the promotion.

Thus, promotion planning can be improved significantly by not pushing all the goods to the stores upfront, but only a fraction of them, keeping reserves in the warehouse. Then, after one or two days of sales, the promotion forecasts should be revised based on the initial sales to adjust how the rest of the inventory is pushed to the stores.
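A minimal sketch of this two-stage push, with assumed data layouts and an assumed upfront share, blending weight and promotion length (illustrative values, not Lokad's method):

```python
# Push only part of the forecast upfront, then dispatch the warehouse reserve
# once the first day of sales is known. All parameters are illustrative.

def dispatch_reserve(initial_forecast, day1_sales, reserve_units,
                     upfront_share=0.6, blend=0.5, promo_days=7):
    """initial_forecast, day1_sales: dicts keyed by store id; returns reserve split."""
    remaining_need = {}
    for store, forecast in initial_forecast.items():
        # Revised estimate: blend the prior forecast with a naive extrapolation
        # of the first day of sales over the whole promotion.
        revised = (1 - blend) * forecast + blend * day1_sales[store] * promo_days
        # What the store still needs beyond the quantity already pushed upfront.
        remaining_need[store] = max(revised - upfront_share * forecast, 0.0)
    total_need = sum(remaining_need.values()) or 1.0
    return {store: reserve_units * need / total_need
            for store, need in remaining_need.items()}

# Example: store_A clearly outsells store_B on day 1 and gets most of the reserve.
print(dispatch_reserve({"store_A": 40, "store_B": 40},
                       {"store_A": 12, "store_B": 3},
                       reserve_units=30))
```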

Don’t tune your forecasts after each operation

One of the frequent questions we get from retailers is whether we revise our forecasting models after observing the outcome of a new promotion. While this seems a reasonable approach, in the specific case of promotion forecasts there is a catch, and a naive application of this idea can backfire.

Indeed, we observe that, for most retailers, promotional operations - that is, sets of products promoted over the same period, typically with a unified promotional message - come with strong endogenous correlations between the uplifts. Simply put, some operations work better than others, and the discrepancy between the lowest performing and the highest performing operations is no less than a factor of 10 in sales volume.

As a result, after the end of each operation, it's tempting to revise all the forecasting models upward or downward based on the latest observations. Yet, this creates significant overfitting problems: the revised forecasts appear artificially more accurate on the past than they really are.

In order to mitigate overfitting, it's important to only revise the promotion forecasting models as part of an extensive backtesting process. Backtesting is the process of replaying the whole history, iteratively re-generating all the forecasts up to and including the newly added promotional operation. Extensive backtesting mitigates large-amplitude swings in the anticipated uplifts of the promotions.
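A minimal sketch of such a backtesting loop, with hypothetical interfaces (`fit_model` and the record layouts are placeholders for whatever forecasting engine is actually in place):

```python
# Replay the history: for each past operation, re-fit the model on the data
# available at the time, forecast that operation, and accumulate the error.
# A model revision is accepted only if it improves this replayed error,
# not just the error on the latest operation.

def backtest(operations, sales_history, fit_model):
    """operations: past promotional operations sorted by start date (dicts);
    sales_history: list of sales records with a 'date' field;
    fit_model: callable returning an object with a .forecast(item, operation) method."""
    errors = []
    for operation in operations:
        past_sales = [s for s in sales_history if s["date"] < operation["start_date"]]
        model = fit_model(past_sales)
        for item in operation["items"]:
            forecast = model.forecast(item, operation)
            actual = item["promo_sales"]
            errors.append(abs(actual - forecast) / max(forecast, 1.0))
    return sum(errors) / len(errors)
```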

Validate “ex post” promotion records

As discussed in the first post of this series, data quality is an essential ingredient for producing sound promotion forecasts. Yet, figuring out the oddities of a promotion months after it has ended is impractical. Thus, we suggest not delaying the review of the promotion data, and doing it at the very end of each operation, while the operation is still fresh in the minds of the relevant people (store managers, suppliers, purchasers, etc.).

In particular, we suggest looking for outliers such as zeroes and surprising volumes. Zeroes reflect either that the operation was not carried out or that the merchandise was not delivered to the stores. Either way, a few phone calls can go a long way toward pinpointing the problem and applying the proper data corrections.

Similarly, unexpectedly extreme volumes can reflect factors that have not been properly accounted for. For example, some stores might have allotted display space at their entrance, while the initial plan was to keep the merchandise in the aisles. Naturally, the sales volumes are much higher, but this is merely a consequence of the alternative facing.
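A minimal sketch of such an ex post review, with an assumed record layout and an illustrative threshold for what counts as an extreme volume:

```python
# Flag the promotional records that deserve a phone call right after the
# operation ends: zero sales, or volumes far beyond the forecast.

def flag_suspicious_records(promo_records, extreme_ratio=5.0):
    """promo_records: iterable of dicts with 'store', 'item', 'units_sold', 'forecast'."""
    flags = []
    for record in promo_records:
        if record["units_sold"] == 0:
            flags.append((record["store"], record["item"],
                          "zero sales: promotion not carried out, or goods not delivered?"))
        elif record["forecast"] > 0 and record["units_sold"] > extreme_ratio * record["forecast"]:
            flags.append((record["store"], record["item"],
                          "extreme volume: different facing or placement than planned?"))
    return flags
```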

Stay tuned: next time, we will discuss the optimization challenges in promotion planning.


Promotion planning in general merchandise retail – Data Challenges

Published on by Joannes Vermorel.


Forecasting is almost always a difficult exercise, but there is one area in general merchandise retail that is considered an order of magnitude more complicated than the rest: promotion planning. At Lokad, promotion planning is one of the frequent challenges we tackle for our largest clients, typically through ad-hoc Big Data missions.

This post is the first of a series on promotion planning. We are going to cover the various challenges faced by retailers when forecasting promotional demand, and give some insights into the solutions we propose.

The first challenge faced by retailers when tackling promotions is the quality of the data. This problem is usually vastly underestimated, by mid-size and large retailers alike. Yet, without properly qualified data about past promotions, the whole planning initiative faces a Garbage In, Garbage Out problem.

Data quality problems among promotion records

The quality of promotion data is typically poor - or at least much worse than the quality of the regular sales data. A promotional record, at the most disaggregated level, consists of an item identifier, a store identifier, a start date, an end date, plus all the dimensions describing the promotion itself.
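As a rough illustration - the field names are ours, not a prescribed schema - such a record could look like this:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PromotionalRecord:
    # The most disaggregated level: one item, one store, one promotional period.
    item_id: str               # e.g. the GTIN
    store_id: str
    start_date: date
    end_date: date
    # Dimensions describing the promotion itself (illustrative examples):
    mechanism: str             # e.g. "-30%", "buy one get one free"
    facing: str                # e.g. "regular shelf", "end aisle", "entrance display"
    custom_packaging: bool = False
    extra_attributes: dict = field(default_factory=dict)
```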

Those promotional records suffer from numerous problems:

  • Records exist, but the store did not fully implement the promotion plan, especially with regard to the facing.
  • Records exist, but the promotion never happened anywhere in the network. Indeed, promotion deals are typically negotiated 3 to 6 months in advance with suppliers. Sometimes a deal gets canceled with only a few weeks' notice, but the corresponding promotional data is never cleaned up.
  • Off-the-record initiatives from stores, such as moving an overstocked item to an end aisle shelf, are not recorded. Facing is one of the strongest factors driving the promotional uplift, and should not be underestimated.
  • Details of the promotion mechanisms are not accurately recorded. For example, the presence of custom packaging, and the structured description of that packaging, are rarely preserved.

After having observed similar issues in many retailers' datasets, we believe the explanation is simple: there is little or no operational imperative to correct promotional records. Indeed, if the sales data are off, it creates so many operational and accounting problems that fixing them very quickly becomes the No. 1 priority.

In contrast, promotional records can remain wildly inaccurate for years. As long as nobody attempts to produce some kind of forecasting model based on those records, inaccurate records have a negligible negative impact on the retailer's operations.

The primary solution to those data quality problems is to put data quality processes in place, and to empirically validate how resilient those processes are when facing live store conditions.

However, the best process cannot fix broken past data. As 2 years of good promotional data are typically required to get decent results, it's important to invest early and aggressively in the historization of promotional records.

Structural data problems

Beyond issues with promotional records, the accurate planning of promotions also suffers from broader and more insidious problems related to the way the information is collected in retail.

Truncating the history: Most retailers do not preserve their sales history indefinitely. Usually, "old" data get deleted following two rules:

  • if the record is older than 3 years, then delete the record.
  • if the item has not been sold for 1 year, then delete the item, and delete all the associated sales records.

Obviously, depending on the retailer, the thresholds might differ, but while most large retailers have been around for decades, it's exceptional to find a non-truncated 5-year sales history. Those truncations are typically based on two false assumptions:

  • storing old data is expensive: Storing the entire 10 years of sales data (down to the receipt level) of Walmart – and your company is certainly smaller than Walmart – can be done for less than 1,000 USD of storage per month. Data storage is not just ridiculously cheap now; it was already ridiculously cheap 10 years ago, as far as retail networks are concerned.
  • old data serve no purpose: While 10-year-old data certainly serve no operational purpose, from a statistical viewpoint even 10-year-old data can be useful to refine the analysis of many problems. Simply put, a long history gives a much broader range of possibilities to validate the performance of forecasting models and to avoid overfitting problems.

Replacing GTINs with in-house product codes: Many retailers preserve their sales history encoded with alternative item identifiers instead of the native GTINs (aka UPC in North America or EAN13 in Europe). Replacing GTINs with ad-hoc identification codes is frequently believed to make GTIN substitutions easier to track and to help avoid a fragmented history.

Yet, GTIN substitutions are not always accurate, and incorrect entries become near-impossible to track down. Worse, once two GTINs have been merged, the former data are lost: it is no longer possible to reconstruct the two original sets of sales records.

Instead, it’s a much better practice to preserve GTIN entries, because GTINs represent the physical reality of the information being collected by the POS (point of sales). Then, the hints for GTIN substitutions should be persisted separately, making it possible to revise associations later on - if the need arises.

Not preserving the packaging information: In food retail, many products come in a variety of distinct formats: from individual portions to family portions, from single bottles to packs, from the regular format to +25% promotional formats, etc.

Preserving the information about those formats is important because, for many customers, an alternative format of the same product is frequently a good substitute when the desired format is missing.

Yet again, while it might be tempting to merge the sales into some kind of meta-GTIN where all the size variants are combined, there can be exceptions, and not all sizes are equal substitutes (e.g. 18g Nutella vs 5kg Nutella). Thus, the packaging information should be preserved, but kept apart from the raw sales.

Data quality, a vastly profitable investment

Data quality is one of the few areas where investments are typically rewarded tenfold in retail. Better data improve all downstream results, from the most naïve to the most advanced methods. In theory, data quality should suffer from the principle of diminishing returns; however, our own observations indicate that, except for a few rising stars of online commerce, most retailers are very far from the point where investing more in data quality would not be vastly profitable.

Also, unlike building advanced predictive models, data quality does not require complicated technologies, but rather a lot of common sense and a strong sense of simplicity.

Stay tuned: next time, we will discuss the process challenges of promotion planning.
