While supply chains got an early start at digitalization back in the 80’s and 90’s with electronic inventory management and EDI (Electronic Data Interchange), many software vendors - and thus most of their clients - gradually fell behind the times in the two decades that followed. While it’s relatively straightforward to refresh user interfaces, for example turning desktop apps into web apps, it’s usually much more challenging to revisit the core architectural design choices supporting a piece of software. Most of those solutions were never engineered for cloud computing, machine learning or a mobile-first user experience - to name a few.
Lokad advocates processes and technologies that make the most of what modern networked computing power can deliver for supply chains through better predictive optimizations. We refer to this approach as the Quantitative Supply Chain perspective. However, for supply chain practitioners, this message can be somewhat confusing, because Quantitative Supply Chain isn’t a variation on the old ways of optimizing supply chain, it’s a different beast altogether.
Thus, let’s have a closer look at the traditional modules as usually found in leading1 APS (Advanced Planning and Scheduling) systems, with a special interest in stock replenishments in a 2-echelon network - e.g. warehouses and stores - and let’s compare those modules with Lokad’s way of delivering predictive optimization. For the sake of concision, in the following, we interchangeably refer to quantitative supply chain (the vision) or to Lokad (the software implementation).
Calendar module

A calendar module provides mechanisms to fine-tune ordering situations - that is, deciding when to buy - where a fixed-order buying cycle is unavoidable.
The very idea that supply chain practitioners should fine-tune their ordering schedule is the wrong way to look at the problem. Ordering comes with multiple types of constraints: calendar, MOQs, budget, etc. Some constraints can be lax, like “no schedule, any day goes except July 4th and December 25th”, or tight, like “the container must be exactly full”. In any case, the system should seek - within all feasible ordering options - the ones that maximize ROI. This process, the “optimization”, must be strictly automated. There is no reason to manually fine-tune anything. Performant numerical solvers were not routinely available in the 80’s; nowadays, they are.
Thus, at Lokad, we gather all the relevant constraints, which themselves are treated as data, i.e. the “database” of authorized ordering schedules, and then roll-out the adequate numerical solvers to get the job done.
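As a minimal illustration of this idea - with an invented schedule and toy economics, not Lokad’s actual solvers - the feasible ordering dates can be enumerated from the constraint data, and the best one picked by its estimated ROI:

```python
# Illustrative sketch: treat the authorized ordering schedule as data, then
# search the feasible options for the one with the best estimated ROI.
# The schedule, cost parameters and roi model are all made-up placeholders.
from datetime import date, timedelta

blackout_days = {date(2024, 7, 4), date(2024, 12, 25)}  # "any day goes except..."

def feasible_order_dates(start: date, horizon_days: int):
    """Enumerate the ordering dates allowed by the calendar constraints."""
    for offset in range(horizon_days):
        day = start + timedelta(days=offset)
        if day not in blackout_days:
            yield day

def estimated_roi(order_date: date) -> float:
    """Placeholder economics: delaying the order cuts holding costs but
    raises stock-out risk; real models are driven by actual data."""
    days_out = (order_date - date(2024, 7, 1)).days
    stockout_risk_cost = 50.0 * days_out         # grows as the order is delayed
    holding_cost = 400.0 / (1 + days_out)        # shrinks as the order is delayed
    return -(stockout_risk_cost + holding_cost)  # ROI = negated total cost here

best = max(feasible_order_dates(date(2024, 7, 1), 14), key=estimated_roi)
print(best)
```

The point is that the schedule is plain data, and the “fine-tuning” is nothing but an argmax over the feasible options.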
See also Humans in modern supply chain.
Event module

The event module manages demand surges and dips due to planned events, such as new product introductions, advertising, price promotions, catalog drops and flyers.
The earliest forecasting models (e.g. Holt-Winters), discovered back in the 50’s and 60’s, were all centered around time-series. Thus, most APS adopted time-series centric perspectives. Unfortunately, supply chain problems cannot be easily framed as time-series: stock-outs, promotions, introductions and phase-outs are all phenomena that don’t fit into the traditional mathematical framework associated with time-series. Thus, in order to cope with broken-by-design time-series forecasting models, APS resort to manual “corrections” applied both to historical data - e.g. smoothing out the uplift of past promotions - and to demand forecasts - introducing biases in future estimates.
At Lokad, our approach is to treat those “disturbances” as plain historical data. We will never know for sure what would have happened if a past promotion had not been triggered. The only thing we know for sure is the characteristic of the promotion - i.e. start and end dates, discount mechanism, scope, etc. - and its resulting sales. Thus, the statistical model needs to be frontally engineered to process the historical data as it exists, instead of being “stuck” with a narrow time-series perspective.
Lokad gathers the relevant events, and processes all this data with statistical models geared toward high-dimensional problems, typically falling under the generic umbrella of “machine learning” algorithms. The latest version of our forecasting technology is centered on differentiable programming. However, it’s still very much a work-in-progress as the machine learning field itself is still evolving rapidly. The need to manually tweak the forecasting results should be treated as a software defect to be fixed, not as an opportunity to leverage supply chain practitioners as “human coprocessors”.
In practice, the bulk of the challenge consists of gathering the relevant data about both past events and future ones (e.g. planned promotions) - which has little to do with statistics. Also, the analytic system, be it Lokad or an APS, is not the place to perform high volumes of manual data entries. Those data entries belong to the transactional realm - aka the ERP - which collects all the facts2 about the supply chain.
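To give a flavor of how events can be learned from the raw history - with a toy model and made-up numbers, not our production forecasting engine - a promotion flag can be treated as a plain feature of a model fitted by gradient descent, the core mechanism behind differentiable programming:

```python
# Minimal sketch of "learning from events as plain data": instead of
# smoothing promotions out of the history, the promotion flag becomes a
# feature, and the model parameters are fitted by gradient descent.
sales = [100, 100, 150, 100, 150, 100, 100, 150]   # observed sales, invented
promo = [0,   0,   1,   0,   1,   0,   0,   1]     # 1 = promotion running

base, uplift = 0.0, 0.0          # parameters to learn
lr = 0.01                        # learning rate
for _ in range(5000):
    grad_base, grad_uplift = 0.0, 0.0
    for s, p in zip(sales, promo):
        err = (base + uplift * p) - s              # model error on this day
        grad_base += 2 * err / len(sales)          # d(MSE)/d(base)
        grad_uplift += 2 * err * p / len(sales)    # d(MSE)/d(uplift)
    base -= lr * grad_base
    uplift -= lr * grad_uplift

print(round(base), round(uplift))   # recovers the base demand and the promo uplift
```

No history gets “corrected” by hand: the model itself accounts for the uplift, because the promotion is part of its inputs.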
Order Expedite Management module
An order expedite management module acts as an early warning tool, providing the buyers with a daily refreshed view of overdue orders and short shipments.
The whole approach of exceptions and alerts is a rather dated way of mitigating problems within complex software systems. This approach was heavily adopted by software vendors back in the 80’s and 90’s, because exceptions and alerts are straightforward to implement on top of a SQL database: it only takes a SELECT statement with a few WHERE conditions. However, as a whole, this approach makes poor use of the scarce resource that supply chain practitioners represent, as it tends to overburden them quickly, to the point that alerts are not “alerts” anymore and the sense of emergency, or of action to be taken, gets lost.
Lokad favors and delivers strict ROI-based prioritization of the “items” of interest which are presented to supply chain practitioners. As a rule of thumb, producing 1 million numbers a day is easy, producing 10 numbers worth being read by a human every day is difficult. Overdue orders don’t always matter: maybe stock levels are still high anyway, or maybe it’s just a recurrent problem for this supplier that has been going on forever. Short shipments should also be automatically processed in order to correct supplier payments accordingly, but they don’t always call for an immediate response. For an “item” to have an ROI, the “item” should be actionable, and the ROI represents the associated estimated ROI of the action. Lokad shines by delivering bespoke prioritizations that are exactly tailored to the specificities of the supply chain of interest.
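As a sketch of the contrast with raw alerts - the actions and ROI figures below are invented for illustration - a prioritization layer scores every candidate action and surfaces only a short, worthwhile shortlist:

```python
# Hedged sketch: instead of firing raw alerts, every candidate action gets
# an estimated ROI, and only a short prioritized shortlist reaches humans.
candidate_actions = [
    {"action": "expedite PO-1041", "estimated_roi": 1200.0},
    {"action": "chase short shipment S-77", "estimated_roi": 90.0},
    {"action": "expedite PO-1042", "estimated_roi": -40.0},   # not worth acting on
    {"action": "rebalance SKU A-3 to store 12", "estimated_roi": 560.0},
]

def daily_shortlist(actions, max_items=10):
    """Keep only actions with a positive expected payback, best first."""
    worthwhile = [a for a in actions if a["estimated_roi"] > 0]
    return sorted(worthwhile, key=lambda a: a["estimated_roi"], reverse=True)[:max_items]

for a in daily_shortlist(candidate_actions):
    print(a["action"], a["estimated_roi"])
```

The hard part, in practice, is not the sorting but producing ROI estimates that actually reflect the economics of the supply chain at hand.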
Deal/Forward Buy module
The deal/forward buy types of modules let the buyer enter deal data into the system in advance of the deal period. This allows the system to purchase replenishment inventory matching the deal window, to calculate forward buy quantities and to determine when to purchase these quantities.
Most APS were initially implemented with a naive perspective where the unit price was assumed to be sufficient - pricing-wise - to compute an optimized order quantity. However, reality has a higher level of detail. Suppliers frequently have complex pricing structures: MOQs, price breaks, discounts on quarterly quotas, temporary deals, etc.
Lokad treats all those elements as input data and input constraints and leverages a direct numerical resolution of a constrained problem to deliver an optimized order quantity. For example, a temporary discount of the supply gives an economic incentive for the company to temporarily overstock: the order quantity becomes the numerical trade-off between the extra discount (linear gains) and the overstock risk (super-linear costs). We deliver the order quantity that directly optimizes this trade-off, in addition to all the other relevant economic forces.
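This trade-off can be made concrete with a toy model - all the cost parameters below are purely illustrative: the discount gain is linear in the quantity, the overstock risk super-linear, and the ordered quantity is the one that maximizes the difference:

```python
# Sketch of the discount-vs-overstock trade-off during a temporary deal:
# linear gains from the discount against super-linear overstock costs.
def net_gain(qty, discount_per_unit=2.0, overstock_cost=0.01):
    discount_gain = discount_per_unit * qty          # linear in quantity
    overstock_risk = overstock_cost * qty ** 2       # super-linear in quantity
    return discount_gain - overstock_risk

# brute-force the candidate quantities; real solvers are smarter, not different
best_qty = max(range(0, 501), key=net_gain)
print(best_qty)  # the quantity that balances the two forces
```

In a real setting, the overstock term would come from a probabilistic demand model rather than a hard-coded quadratic penalty, but the shape of the resolution is the same.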
Order Analysis module
The order analysis modules identify potential out-of-stocks. Those modules provide the up-to-date information needed to understand the stock status for items such as imports, or those being custom manufactured - long lead-time items for which there are often two, three or more orders outstanding.
This is another case of somewhat simplistic “ease-of-implementation” software design. Back in the 80’s, it was difficult to perform any kind of network-wide analytics; thus, most software vendors defaulted to a software design enforcing “SKU isolation”. Each SKU is processed in isolation, and the computed statistical estimators - like the expected stock-out probability for the next ordering cycle - are all fully specific to the one SKU of interest.
At Lokad, we observe that every single SKU competes with all the other SKUs for the budget of the company. Thus, it does not really matter whether the stock-out probability of a given SKU is high or low. The only thing that matters is whether the payback for ordering more of this SKU is high enough to not be outcompeted by any alternative option - i.e. ordering more from another SKU - readily available to the company. For example, if the SKU is associated with a product that has a high cost, a very low margin and is only bought by a single large corporate client, who is about to leave according to the sales team, maintaining the service level for this SKU is a sure recipe for creating dead inventory.
In practice, Lokad delivers prioritized order quantities that directly reflect the end-to-end economics of the supply chain.
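A minimal sketch of this budget competition - SKUs, costs and rewards being invented for illustration - scores every marginal unit by its expected economic reward, and buys units in reward order until the budget runs out:

```python
# Illustrative sketch of SKUs competing for one purchasing budget: each
# marginal unit is scored by its expected economic reward per dollar, and
# units are bought in score order until the budget is exhausted.
skus = {
    # sku: (unit_cost, rewards of 1st, 2nd, 3rd extra unit - diminishing returns)
    "A": (10.0, [8.0, 5.0, 1.0]),
    "B": (20.0, [30.0, 4.0, 0.5]),
    "C": (5.0,  [2.0, 1.5, 1.0]),
}

candidates = []                       # one candidate per marginal unit
for sku, (cost, rewards) in skus.items():
    for rank, reward in enumerate(rewards):
        candidates.append((reward / cost, sku, rank, cost))

purchases = {sku: 0 for sku in skus}
budget = 40.0
for score, sku, rank, cost in sorted(candidates, reverse=True):
    # only buy the (rank+1)-th unit once the previous ones are bought
    if purchases[sku] == rank and cost <= budget:
        purchases[sku] += 1
        budget -= cost

print(purchases)
```

Note that no per-SKU service level appears anywhere: a SKU gets stock only as long as its next unit beats every alternative use of the same budget.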
Overstock Transfer module
Overstock transfer modules help the buyers to manage overstock in the warehouses. It allows the buyers to transfer excess inventory from one location to another that has sufficient demand, in order to avoid making additional purchases from the vendor.
In supply chain, it’s nearly always possible - but usually not profitable - to move something from location A to location B. Thus, when facing a network where the same item can be stored in many locations, it is only natural to treat any stock movement between any two locations as a potential decision.
Thus, Lokad has built-in capabilities to perform network-level optimization of this nature, basically brute-forcing all the available options for stock movements. The most challenging part of the undertaking is to properly reflect the economic friction associated with moving the stock. Indeed, the friction typically cannot be properly reflected when modeling stock movements at the SKU level. For example, transportation costs tend to be highly non-linear: if a truck has to be dispatched, let’s make the most of its available capacity.
Unfortunately, the number of options grows much faster than the number of SKUs. Let’s consider a network that includes 2000 items stored in 10 warehouses, resulting in 10 x 2000 = 20,000 SKUs in total. The total number of edges to be considered for the stock movements is 10 x (10 - 1) x 2000 = 180,000 edges, a number much larger than the original number of SKUs. For the readers who happen to be familiar with algorithms, it’s a straightforward case of _quadratic_ complexity.
Yet, when considering the processing power available nowadays, this specific case of quadratic complexity is mostly a non-issue - assuming that the underlying software has been properly engineered for this type of numerical exploration. Indeed, supply chain networks very rarely exceed 10,000 locations, even when looking at the most gigantic companies; and a few heuristics can be used to dramatically lower the number of edges to be explored in practice, as many pairs of locations are bound to be nonsensical, such as rebalancing stocks between Paris and Sydney.
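The combination of brute-force enumeration and heuristic pruning can be sketched as follows - the locations, coordinates and pruning threshold are invented for illustration:

```python
# Sketch of the network-level view: every (origin, destination) pair is a
# candidate lane for a stock transfer, and a cheap heuristic prunes the
# nonsensical long-haul pairs before any heavy optimization runs.
from itertools import permutations
from math import dist

locations = {  # rough latitude/longitude, for illustration only
    "Paris": (48.9, 2.3), "Lyon": (45.8, 4.8), "Lille": (50.6, 3.1),
    "Sydney": (-33.9, 151.2),
}

all_lanes = list(permutations(locations, 2))    # n * (n - 1) directed pairs

def plausible(origin, destination, max_degrees=20.0):
    """Heuristic cut: skip pairs that are absurdly far apart."""
    return dist(locations[origin], locations[destination]) <= max_degrees

lanes = [(o, d) for o, d in permutations(locations, 2) if plausible(o, d)]
print(len(all_lanes), len(lanes))  # pruning removes the Sydney pairs
```

The quadratic blow-up is visible even at this toy scale, and so is how cheaply a distance heuristic trims it down.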
However, back in the 80’s and 90’s, due to the computing hardware available at the time, APS were already struggling to cope with the number of SKUs. Naturally, in this context, coping with problems of quadratic complexity was simply out of the question.
Fast-forward to the present day: many vendors had to introduce separate modules to deal with any kind of problem entailing network-wide optimization. There is no real business motivation for having a separate module. This state of affairs mostly reflects that the original vendor has been duct-taping its software product to cope with a class of problems that conflict with design choices made two decades earlier.
In contrast, Lokad decided to address this class of problems head-on: they are first-class citizens in our software stack. Naturally, the actual resolution of the challenge still entails effort in practice, because all the transportation costs and constraints need to be made explicit.
Planning module

Planning modules let sales teams or marketing teams pre-plan orders for specific events such as promotions or special orders. Teams can create a planned order in the system more than one year in advance of the planned order date.
Lokad’s approach starts by establishing a clear separation between facts and predictions. Let’s consider a B2B retail network. If a client announces on February 10th that they will be ordering 1000 units, with an expected delivery date on March 1st, then this announcement is a fact. If this client has a habit of revisiting their intended delivery date at the last minute - typically postponing it by 1 week - then this pattern must be taken into account. However, this pattern analysis falls into the predictions side of the problem.
Lokad tackles this class of problems with a technology that delivers general probabilistic forecasts and not just demand probabilistic forecasts. Any statement made about the future, say an expected arrival date, tends to be uncertain to some degree. Supply chains require versatile high-dimensional predictive tools, which is exactly why Lokad went down the path of differentiable programming.
Also, facts must not be collected by the analytics layer, be it Lokad or the APS. This is not because the software cannot do it - designing software to collect facts is relatively straightforward - but because it generates a high degree of accidental vendor lock-in. Indeed, as soon as data entries flow through a system in the company, this system becomes the de facto master controller of this data.
Our experience at Lokad indicates that outdated analytics layers frequently stick around for as long as an extra decade past their point of obsolescence, for no other reason than that mission-critical data would be lost if the system were turned off. Meanwhile, the original software vendor still collects maintenance fees, which gives the vendor a strong incentive to create the problem in the first place.
Projections module

The projections modules enable buyers to create reports that project future demand and future purchases as far as one year in advance. These forecasts can be shared with the suppliers, in order to let them plan their respective supply capacities more accurately.
Naked forecasts are harmful, and can no longer be considered a sane supply chain practice. Check out the naked forecasts antipattern for more information. This does not mean that Lokad has no forecasting capabilities - we do. However, we maintain that the classic time-series forecasting approach is simply wrong and needs to end. Classic time-series can work for very high volume, very steady products, but for anything else - and especially erratic or intermittent demand - probabilistic forecasting is the way to go.
Furthermore, historical time-series forecasting models - say, Holt-Winters or ARIMA - had massive shortcomings whenever the product history was too short, too erratic, too atypical, too low volume, … Most software vendors responded to those problems with two approaches, equally dysfunctional in their own ways:
- Human coprocessors: as the forecasting model frequently ends up producing nonsensical results, human operators - i.e. planners - are used by the system as “coprocessors” to manually override forecasts whenever “numbers don’t feel right”. Unfortunately, the task is endless, as forecasts have to be continuously refreshed, forcing planners into a never-ending cycle of manual overrides. Such a task also produces undesired side effects: human operators grow used to considering the forecasts wrong, and tend to correct them even when they are not, often based on pure gut feeling.
- Model competition: as each time-series forecasting model has its own strengths and weaknesses, a competition of many forecasting models - the reasoning goes - should yield good results by letting the system “pick” the best model in every single situation. Unfortunately, this fails for two reasons. First, all models happen to rely on a time-series framework, and thus, are all subject to the very same limitations. Second, all models are “classic” and fail to deliver the probabilistic forecasts that supply chain requires.
Furthermore, forecasting isn’t just about the demand. Lead times should be forecasted as well. Also, the structure of supply chain matters. In B2B, a steady stream of sales can hide the fact that all orders originate from the same client. If this client is lost, many items instantly become overstocked if not dead stock. A proper predictive optimization of the supply chain must take this risk into account. Lokad’s technology has been engineered accordingly.
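As an illustration of the probabilistic angle - the demand and lead time models below are toy assumptions, not our actual technology - a Monte Carlo simulation turns an uncertain lead time and an uncertain daily demand into a full distribution of the lead-time demand, from which any quantile can be read:

```python
# Minimal sketch of a probabilistic forecast: instead of one number, the
# output is a distribution over future demand, from which any quantile or
# risk measure can be read off directly.
import random

random.seed(42)  # deterministic for the sake of the example

def simulate_lead_time_demand(n_draws=10_000):
    """Draw (lead time, daily demand) scenarios; return sorted demand totals."""
    totals = []
    for _ in range(n_draws):
        lead_time = random.choice([5, 7, 7, 10])        # uncertain lead time, days
        demand = sum(random.randint(0, 4) for _ in range(lead_time))
        totals.append(demand)
    return sorted(totals)

totals = simulate_lead_time_demand()
median = totals[len(totals) // 2]
p90 = totals[int(0.9 * len(totals))]
print(median, p90)   # e.g. reorder against the 90th percentile, not the mean
```

Note that the lead time uncertainty is simulated jointly with the demand uncertainty: both are forecasts, and neither reduces to a single number.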
On the specific angle of sharing forecasts with suppliers, while better coordination between buyers and suppliers is always preferable, we have observed that successful forecast-driven coordination between companies has been few and far between. Suppliers have many clients of their own anyway3. Thus, even if the forecasts delivered by the local buyers were accurate, the supplier has no way to reconcile all those disparate forecasts: the sum of the forecasts is NOT the forecast of the sum.
Security module

Surprisingly enough, for some APS, security features are not all present by default in the system, but are provided as a separate module. The purpose of a security module is to prevent access for some users. It also enables management to secure component actions and restrict views for important areas of the system, such as company control factors, item maintenance, vendor maintenance, orders, deals and other components.
In software jargon, here we are talking about the cross-cutting concerns of authentication and authorization.
- The authentication ensures that end-users doing anything in the system are really the person that the system believes they are. Here, Lokad adopts the modern approach that authentication must be delegated whenever possible. End-users should not have yet another password to deal with. Instead, Lokad leverages SAML as the de facto industry standard to delegate the authentication to federated identity management (FIM).
- The authorization delivers fine-grained control over who can do what within the system. Here, Lokad features an extensive canonical ACL (Access-Control List) system, which is also the de facto practice of modern enterprise systems. Lokad also features some personalization capabilities, which supplement the ACL from a user-experience perspective.
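A canonical ACL boils down to default-deny plus explicit grants; the sketch below - with invented roles, actions and resources - illustrates the mechanism:

```python
# Hedged sketch of a canonical ACL check: entries grant (role, action,
# resource) triples, and every request is denied unless explicitly allowed.
acl = {
    ("planner", "read",  "forecasts"),
    ("planner", "write", "forecasts"),
    ("buyer",   "read",  "forecasts"),
    ("admin",   "write", "vendor-maintenance"),
}

def is_authorized(role: str, action: str, resource: str) -> bool:
    """Default-deny: access is granted only by an explicit ACL entry."""
    return (role, action, resource) in acl

print(is_authorized("buyer", "read", "forecasts"))    # allowed
print(is_authorized("buyer", "write", "forecasts"))   # denied by default
```

Production ACL systems add groups, inheritance and auditing on top, but the default-deny principle stays the same.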
Lokad activates all its security features by default, no matter which package has been originally sold to any of our clients. We believe that optional security4 is a terrible practice from software vendors. Securing software is already exceedingly difficult; such a practice only makes it worse.
To be fair, the ACL angle is probably the least challenging concern of the whole matter of software security. A more interesting question is: how much security by design does the very architecture of the system deliver? However, answering this question goes beyond the scope of the present article.
Export to Excel
The export to Excel module provides an easy way of transporting data for use in other systems or for analysis.
As it’s reasonably straightforward to produce Excel sheets5, many vendors - including Lokad - feature capabilities in this area. Yet, a closer examination indicates that most vendors only deliver half-baked capabilities. Let’s review the salient points of good export-to-Excel implementations:
- Complete historization: the system should offer the possibility to track and re-download every single Excel sheet ever exported. Indeed, when facing incorrect figures in the spreadsheet - a problem that will happen (if only because of incorrect data inputs) - not having full traceability on the code path that ended up generating this spreadsheet will vastly complicate, slow down - and sometimes prevent - any attempt at debugging and fixing the problem.
- Maxing-out the spreadsheet capabilities: practitioners expect to be able to generate fat spreadsheets of up to 1 million lines6 - that is, the actual limit of Excel itself. Thus, the system should be able to generate heavyweight spreadsheets in order to not get in the way of practitioners who just want to do their own bit of data crunching within a familiar tool. Needless to say, practitioners also expect those fat exports to be swift.
- Built-in protection against spreadsheet attacks: Excel is a dangerous attack vector for large organizations. Unfortunately, securing the spreadsheets generated by the system cannot be an afterthought, it has to be an integral part of the design, pretty much from Day 1.
- Programmatic configurability of the exports: Having to deal with two pieces of software - namely the APS and Excel - is painful enough for the supply chain practitioners. The situation should not be made worse by spreadsheets that always require some extra post-extraction processing to become usable. This implies that everything happens before the extraction, within the APS. Thus, the APS needs programmatic capabilities to properly prepare the spreadsheets prior to the export.
Lokad delivers all of the above, while most of our competitors don’t. The devil is in the details.
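The programmatic-preparation point can be sketched as follows - CSV stands in for a real spreadsheet writer, and the data is invented: the export is fully shaped before extraction, so no post-extraction processing is needed.

```python
# Sketch of "everything happens before the extraction": the export is
# prepared programmatically - columns chosen, figures rounded, rows sorted
# by ROI - so the practitioner opens a directly usable file.
import csv, io

rows = [
    {"sku": "A-3", "suggested_qty": 120.4, "estimated_roi": 560.0},
    {"sku": "B-7", "suggested_qty": 12.0,  "estimated_roi": 1200.0},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["sku", "suggested_qty", "estimated_roi"])
writer.writeheader()
for row in sorted(rows, key=lambda r: r["estimated_roi"], reverse=True):
    row["suggested_qty"] = round(row["suggested_qty"])   # no post-export cleanup
    writer.writerow(row)

export = buffer.getvalue()
print(export.splitlines()[1])   # the best-ROI line comes first
```

The same shaping logic applies regardless of the target format; the spreadsheet writer is an implementation detail.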
Our conviction is that Lokad is a simpler piece of software than most APS on the market. Yet, our capacity to deliver supply chain performance through predictive technologies is greater. Indeed, most of the APS complexity is accidental, stemming from software design choices made decades ago to address inward-looking software problems long gone. However, most architectural software choices, once made, cannot be undone.
The term “Advanced Planning System” (APS) is nowadays mostly a misnomer, as those software products primarily reflect 80’s and 90’s visions of what supply chain software ought to be. Software-wise, many choices made at the time did not stand the test of time. ↩︎
In order to keep the applicative landscape of the supply chain sane, it’s critically important to separate systems that operate on facts (accounting, payments) from those that operate on predictions (forecasting). The first systems are expected to be absolutely correct down to the last cent, while the latter are expected to be roughly correct. The two visions are profoundly different, and result in radically different software designs and processes. ↩︎
If a supplier is exclusively serving your company, then this supplier should be treated as an integral part of your supply chain. Demand forecasts are only intermediate numerical artifacts, and the only numbers that truly matter are the quantities to be produced, as the whole production is dedicated to your company anyway. ↩︎
Letting the client pay for security is fair if the product, software or hardware, is primarily intended for security. It’s fair that vendors selling, say, hardware authentication devices get to charge for them. We oppose the practice of selling insecure products, where the security comes as an add-on. ↩︎
The old binary Excel 97 format - aka the “.xls” files - was a genuinely insane piece of engineering. The newer XML-based format introduced with Excel 2007 - aka the “.xlsx” files - is still awful, but if you stick to the “good parts”, it’s possible to preserve the sanity of the software engineering team in charge of the export-to-Excel feature. ↩︎
While dealing with one spreadsheet of 1 million lines is bad, dealing with 20 spreadsheets - 50,000 lines each - is worse. Modern systems should largely alleviate the need to resort to spreadsheets in the first place. However, if supply chain practitioners, in spite of all the efforts, have the clear intent of using Excel for their analytics, then the “system” should not get in the way. ↩︎