Every single SKU calls for mundane daily decisions, such as bringing in more stock or adjusting the price tag. Naturally, sticking to a fully manual process for those decisions is labor intensive, and companies have been adopting a variety of software-based automation solutions. Yet the most common design strategy for those solutions remains the same: replicating, through software, something essentially similar to the existing “human” practice.

This approach isn’t new. Arguably, it was the perspective championed by expert systems, which fell into disgrace during the second AI winter of the late 1980s and early 1990s. Yet, while the buzzword itself is gone - I don’t think any significant software vendor still brands itself as an “expert system” vendor nowadays - I can’t help but notice that the underlying perspective remains prevalent in most verticals.

Indeed, most software vendors - and most in-house initiatives as well - remain stuck in the perspective of replicating the existing practice, which itself emerged by mimicking the original fully manual process. The expert system of old may have been replaced by a modern deep learning counterpart, but the overall paradigm remains (nearly) untouched.

In supply chain, this pattern is striking because nearly the entire practice remains stuck on approaches that emerged in a world where computers did not exist yet. For example, many - if not most - companies still distinguish the planning team, the people who establish the demand forecast for each product, from the supply team, the people who place the purchase orders based on that forecast. This division emerged from a Taylorist perspective where employees had to be as specialized as possible in order to perform maximally at their job.

(Illustration: devil software)

Yet, when it comes to supply chain optimization, Taylorism is at odds with the reality of supply chains. Reality dictates that future demand is not independent from present decisions. If a much bigger purchase order is placed - yielding a much lower unit purchase price - the selling price can be lowered as well, potentially outcompeting the market and thus vastly increasing the demand. In a world without computers, Taylorism was the least-worst option: while not very efficient, it was tremendously scalable. In a world with computers, however, the situation calls for a more integrated approach that factors in, even crudely, such feedback loops. As the saying goes, it is better to be roughly right than exactly wrong.
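To make the feedback loop concrete, here is a minimal sketch with invented numbers - the quantity discount, the fixed margin, and the price elasticity are all hypothetical; only the structure matters. A bigger order lowers the unit cost, which permits a lower selling price, which, under a constant-elasticity demand curve, raises the expected demand:

```python
# Hypothetical illustration of the order-size -> price -> demand feedback loop.
# All numbers are invented; the point is the structure, not the values.

def unit_cost(order_qty: float) -> float:
    """Supplier quantity discount: bigger orders, lower unit price."""
    return 10.0 * order_qty ** -0.1

def expected_demand(selling_price: float) -> float:
    """Constant price-elasticity demand curve (elasticity = -1.5, invented)."""
    return 1000.0 * (selling_price / 15.0) ** -1.5

for qty in (100, 1000, 10000):
    cost = unit_cost(qty)
    price = cost * 1.3  # keep a fixed 30% margin
    demand = expected_demand(price)
    print(f"order {qty:>6} units -> cost {cost:5.2f}$, price {price:5.2f}$, "
          f"expected demand {demand:7.0f} units")
```

The forecast here is not an input to the ordering decision; the two are jointly determined - which is precisely what the planning/supply split assumes away.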

When probing clients or prospects on this matter, the situation is frequently unclear. The solutions - originally designed to replicate a purely manual process - have been in place for so long that people have lost sight of the more fundamental aspects of the problem they are trying to solve. Too frequently, people have become specialists of the solution in place, rather than specialists of the problem faced by the company. Thus, when trying to improve the situation, all thinking is anchored in what is perceived as the flaws of the current solution. Due to this bias, approaches that revisit the problem in depth are perceived as risky by both management and their teams. Conversely, the option of incrementally improving the existing solution is deemed safe, or at least a lot safer.

Looking back, the history of enterprise software is unfortunately full of stories of vendors forcing their “new way” of doing things, which turned out to be a lot more tedious and a lot more rigid than the processes of old, yielding zero or even negative productivity gains. Anecdotally, I observe that most companies operating large supply chains have not substantially reduced the white-collar workforce supporting those supply chains over the last two decades, while blue-collar jobs have been aggressively automated away through better plants and better warehouses. These unfortunate episodes injected a healthy dose of skepticism into the ecosystem against exhausting “reorgs” undertaken for the sake of becoming compliant with a new software solution.

However, I also note that supply chain executives usually vastly under-estimate the risks associated with any “smart” software - that is, any software whose correctness cannot be fully and concisely specified. Indeed, most enterprise software is nothing more than a CRUD app, i.e. a diligent bookkeeper of mundane records such as invoices, suppliers, products, payments, etc.
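A toy sketch makes this distinction concrete (both functions below are hypothetical, for illustration only). The CRUD operation has an exact contract that can be stated - and tested - in one line; the forecasting function has no such contract, and its quality can only be assessed statistically, after the fact:

```python
# Hypothetical sketch contrasting "dumb" and "smart" software.

# CRUD: the contract is exact and fully testable.
invoices: dict[int, float] = {}

def add_invoice(invoice_id: int, amount: float) -> None:
    """After this call, invoices[invoice_id] == amount. Nothing else changes."""
    if invoice_id in invoices:
        raise KeyError(f"duplicate invoice {invoice_id}")
    invoices[invoice_id] = amount

# "Smart": there is no exact contract. Any returned value is "valid";
# only its statistical accuracy, measured later against realized demand,
# tells whether the component works at all.
def forecast_demand(history: list[float]) -> float:
    """Toy moving-average forecast (assumes a non-empty history)."""
    window = history[-3:]
    return sum(window) / len(window)
```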

Achieving business success with “dumb” software is already quite difficult - many ecommerce companies struggle merely to maintain good uptime for their web frontend - but whenever finicky pieces such as machine learning components are involved, success becomes a lot harder. Few companies communicate openly on this matter, but whenever machine learning is involved, failures dominate the field. Yet the situation is not as bleak as it seems, because the downsides are limited while the upsides are not.

Thus, in the long run, the companies that try and fail the most also happen to be the ones that succeed the most.
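A back-of-the-envelope simulation shows why - all numbers below are invented assumptions, not observed failure rates. If a failed initiative loses at most its budget while a successful one pays off with a heavy tail, a high failure rate remains compatible with a strongly positive expectation:

```python
import random

random.seed(42)

# Invented assumptions: 80% of "smart" initiatives fail and lose their
# (capped) budget; 20% succeed with an uncapped, heavy-tailed payoff.
BUDGET = 1.0  # cost of one initiative, in arbitrary units

def initiative_payoff() -> float:
    if random.random() < 0.8:
        return -BUDGET                         # failure: downside capped at the budget
    return BUDGET * random.paretovariate(1.2)  # success: heavy-tailed upside

trials = [initiative_payoff() for _ in range(100_000)]
print(f"failure rate ~80%, mean payoff per initiative: {sum(trials)/len(trials):+.2f}")
```

The long-run winners are the companies running many such trials, precisely because the payoff distribution is asymmetric.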

Nevertheless, software risks are very real, and there is certainly no point in increasing them further. Yet sticking to the paradigm of replicating a now-defunct manual practice invariably yields a series of accidental complications. For example, because the planning team and the supply team are distinct, an entire class of problems emerges for the sole purpose of keeping those teams in sync. As a rule of thumb in software development, inflating requirements by 25% increases costs by 200%. Resolving those complications drastically increases the costs, and hence, in practice, the risk, as companies tend - a sane reaction - to terminate initiatives that explode their budgets or their deadlines.

Thus, when facing the half-century-old question of adapting the software to fit the organization vs. adapting the organization to fit the software, it is wise to start intellectually from a blank sheet and first figure out which high-level problems actually need to be solved. Is performance measured in percentages or in dollars? Are the long-term implications correctly factored in - such as training customers to only buy during sales? Is the approach capitalizing on human inputs, or merely consuming them? Is the practice driven by habit or by imperious necessity - say, two fashion collections a year vs. two harvests a year?

An intimate understanding of the problem to be solved - one that clearly differentiates the problem itself from its present solution - is the key to figuring out whether the existing solution is worth preserving, or whether newer software capabilities call for a simpler, more direct resolution of the problem.