Years ago, but also years after founding Lokad, I realized that no single app would ever deliver anything close to greatness as far as supply chain optimization was concerned. We did our best, but it wasn’t enough. No matter how many features we poured into the earliest versions of Lokad, each new client seemed desperately unlike all the previous ones we had managed to cover so far. Supply chain challenges are simply too diverse and too chaotic to be framed into a sane number of menus, buttons and options.

A python snake, unrelated to Python, the programming language.

As a matter of fact, most of our fellow competitors acknowledged this situation and went down the path of building software products featuring an insane number of menus, buttons and options, all of this as a desperate attempt to cope with all the supply chain challenges. Unfortunately, this path leads to software monstrosities that turn into spectacular failures when deployed at scale. I refer to this antipattern as the Non-Euclidian Horror.

Thus, facing a class of problems - i.e. supply chain challenges - that simply could not be solved with a single app, we started, partly accidentally1, to address the meta-problem instead: how to deliver bespoke apps where each app would be dedicated to the resolution of a single problem for a given situation, e.g. replenishment optimization for one specific company.

Delivering bespoke software for businesses is nothing new. The software industry started that way in the 60s, and later evolved during the 80s towards the dominant shrinkwrap model that we know today. As a rule of thumb, bespoke software tends to have many undesirable properties compared to shrinkwrap: higher upfront investments, lengthy setups, higher maintenance costs, higher risks, etc.

Yet, the experience that I had acquired during the first few years of Lokad indicated that, as far as supply chain optimization was concerned, bespoke software had one key advantage: it actually delivered great results. Indeed, while our original app was, at best, delivering passable results2 no matter how accurate the forecasts, the bespoke prototypes we built for individual clients were frequently doing great. Furthermore, the only trick involved was the extreme specialization of the piece of software for the problem at hand.

After exhausting what seemed to be all the alternatives, we concluded that delivering bespoke apps was the only way to go. Yet, scalability (how to deliver many apps) and maintainability (how to keep maintenance costs under control) were two core concerns. First, we had to choose a programming language. At the time, we considered many options: R, Python, JavaScript, Lua, C#, … and rolling out our own domain-specific programming language (DSL), which would later be known as Envision. Discussing the pros and cons of all those options would be somewhat tedious3, thus, for the sake of clarity, the discussion here is kept to the choice of Python vs. Envision, with Python being the strongest contender against rolling out our own DSL.

Python was appealing because of its simplicity, and because of its rich third-party ecosystem of libraries, especially in the machine learning area4. It was also a low-cost option for Lokad: as Python, and pretty much all of its popular libraries, are open source, we could have just repackaged a narrow subset of Python, whitelisting a few dozen hand-picked packages, and been done with it. Most of the work for Lokad would have been centered around delivering a PaaS experience around Python à la Heroku, but tailored as much as possible towards supply chain challenges.

Yet, here is a litmus test that we considered: was it reasonable to expect that a business analyst - later to be known as a supply chain scientist - working 1 day per week for 6 months would deliver a production-grade app to solve a mission-critical supply chain challenge, like replenishment, for a $10M company? When looking at the Python option, it was clear that we could never even start getting close to such a level of operational efficiency.

Firstly, Python requires software engineers. Indeed, Python, like any full-fledged programming language, exposes tons of technical intricacies to whoever is writing code in Python. While the role of the supply chain scientist was only formalized later on, we had the intuition early on that, even considering smart, talented people, expecting them to be both experts in supply chain engineering and experts in software engineering was too much. We needed programmatic capabilities accessible to a large spectrum of technically-minded people, not just professional software engineers.

Thus, we crafted Envision as a language to eliminate entire classes of technical problems that are unavoidable with Python. For example:

  • Objects can be null, dates can be absurdly far into the past or into the future, NaN can happily propagate through your data pipeline, strings can become absurdly large… In supply chain, these “features” are nothing but problems waiting to happen.
  • Object-oriented elements (i.e. classes) are guaranteed to be misused5, and the same can be said about custom exceptions or Regexes. The mere presence of those elements is, at best, an unhealthy distraction.
  • Multiple basic operations, like parsing disparate tabular files (incl. Excel spreadsheets), are not part of the language, and require dealing with many disparate packages, each with its own technical intricacies.
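The first of those pitfalls is easy to reproduce. Here is a minimal Python sketch (with made-up demand figures, not actual Lokad data) of how a single NaN silently poisons an entire aggregate:

```python
import math

# Hypothetical demand history; one record got corrupted into NaN upstream.
demand = [120.0, 95.0, float("nan"), 80.0]

total = sum(demand)            # no error is raised...
average = total / len(demand)  # ...and the NaN quietly propagates

print(math.isnan(average))     # → True: the aggregate is now meaningless
```

No exception is ever thrown; the corruption only becomes visible once someone inspects the final report.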

None of these classes of technical problems can be removed from Python without crippling the language itself. Envision, as a programming language, is accessible to supply chain specialists (vs software specialists), only because of its razor-sharp focus on predictive supply chain optimization problems.

Think of the last time you had to perform calculations with an Excel spreadsheet, and picture yourself dictating all the changes you brought to this Excel sheet over the phone, without being able to see the sheet for yourself. That’s what a supply chain optimization initiative driven by practitioners, but implemented by software engineers (non-specialists in supply chain), looks like. Business spends an enormous amount of time conveying what it wants to IT; and IT spends an enormous amount of time trying to figure out what business wants. After a decade of experience at Lokad, I observe that relying on software developers, who are not supply chain specialists, to deliver a quantitative supply chain optimization initiative multiplies the costs by at least a factor of 5, no matter how agile and talented the software team may be.

Secondly, the maintenance costs of hasty Python prototypes go through the roof. Outside the software industry, few people realize that software engineering6 is mostly about keeping maintenance costs under control. Yet, cracking supply chain optimization problems is a messy process: the data from many poorly reliable systems needs to be reliably pipelined, imperfect and ever-changing processes need to be documented and modeled, the optimization metrics reflect a business strategy in a constant state of flux, etc. As a result, whichever piece of software gets written to deliver the supply chain optimization, it always embeds a massive dose of domain-specific complexity, merely coping with what the world is throwing at us.

Yet, time is of the essence. There is no point in having the perfect plan for last year’s production. As a rule of thumb, it is safe to assume that the day the software prototype starts working, it will be moved to production in a matter of weeks, no matter whether the prototype is well or badly written.

Expecting the upper management to approve a 6-month delay to rewrite the prototype and make it production-grade from a maintenance perspective is wishful thinking. Yet, putting a hasty Python prototype in production is the recipe for epic maintenance overheads, fighting an uphill battle against a never-ending stream of bugs that need to be duct-taped 24/7.

Thus, the only practical way to keep the production sane is to write the prototype with a programming language that ensures a high degree of correctness by design. For example, unlike Python, Envision delivers:

  • Finite execution time guaranteed at compile time: when processing multiple terabytes of data, it becomes very tedious to wait for hours before realizing that a calculation is just never going to terminate.
  • Bounded memory consumption guaranteed at compile time: struggling with out-of-memory errors in the nightly production batch is anything but fun and, in practice, severely disrupts operations.
  • Atomic reads and writes: Envision prevents, by design, concurrent reads and writes within the filesystem, even when files are being pushed through FTP while scripts are executing. The filesystem backing Envision is pretty much a Git tailored for giganormous flat files. Without proper data versioning, many bugs turn into heisenbugs: by the time somebody delves into the problem, the data has been refreshed, and the problem can’t be replicated anymore.
  • Ambient scale-out execution of the program over a cloud of computing resources, removing all the parallelization hurdles that are unavoidable as soon as the data exceeds a few tens of gigabytes.
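To illustrate the first guarantee by contrast, here is a hypothetical Python replenishment loop (illustrative only, not actual Lokad code) whose termination depends entirely on runtime data - something no compiler for a generic language can rule out ahead of time:

```python
# Hypothetical replenishment logic: keep ordering until stock covers demand.
def replenish_until_covered(stock, demand, order_qty):
    orders = 0
    while stock < demand:   # terminates only if order_qty > 0
        stock += order_qty
        orders += 1
    return orders

print(replenish_until_covered(10, 25, 5))  # → 3

# With order_qty = 0, the loop never terminates, yet Python happily
# compiles and runs it; the bug is only discovered at runtime, in production.
```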

Generic programming languages deliver little correctness-by-design; and Python, leaning far toward the late-binding end of the spectrum, delivers exceedingly little in this area. Even when considering better alternatives - from the correctness-by-design perspective - like Rust, those alternatives are nowhere near satisfying for supply chain optimization.

Here are a few more areas where Envision shines in ways that are simply not accessible to Python:

Defence in-depth: As soon as someone starts writing code in your organization, unless very special precautions are taken7, their code becomes an immediate liability from an IT security perspective. With Python, it is pretty much possible to do anything on the machine running the Python script. Properly sandboxing Python in practice is a devilishly complicated problem. In particular, any string produced by the Python script is a potential injection vector. While SQL injections are notorious, (too) few people realize that even plain flat text files like CSV are vulnerable to injection attacks. Envision delivers a degree of security that simply cannot be replicated with Python. Data breaches are on the rise; throwing bits of Python all over the place isn’t going to do IT security any good.
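To make the CSV point concrete, here is a sketch of a “formula injection”: the attacker-controlled string below is hypothetical, but any cell starting with = (or +, -, @) may be evaluated as a formula once the CSV is opened in a spreadsheet:

```python
import csv
import io

# Hypothetical attacker-controlled value ending up in a CSV export.
malicious_name = '=HYPERLINK("http://evil.example","click me")'

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["product", "quantity"])
writer.writerow([malicious_name, 42])  # csv.writer quotes it correctly, yet
                                       # spreadsheets still treat a leading
                                       # '=' as a formula when opening the file

# A common mitigation: neutralize risky leading characters with a quote.
def sanitize(cell: str) -> str:
    return "'" + cell if cell and cell[0] in "=+-@" else cell

print(sanitize(malicious_name))  # the defused value starts with '=
```

The Python code itself is perfectly valid; the vulnerability only exists because the output is later interpreted by another program, which is exactly why such injections are so easy to overlook.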

Transparent performance: If a program is impractically slow to run, then this program should not even compile in the first place8. If a program is made one line shorter, then the program should run faster. If only a single line is changed, only this line should get recomputed9 when rerunning the program over the same data. When compiling, the compiler should target not any given machine but a cloud of computing resources, delivering automatic data-driven parallelization. Envision goes a long way towards delivering all these properties by default, with no coding effort whatsoever. In contrast, it takes massive usage of specialized libraries in Python to even start approximating such properties.
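The “recompute only what changed” idea can be approximated in miniature with content-addressed caching. The sketch below is only an illustration of the principle; Envision’s actual mechanism (diffing compute graphs) is far more involved:

```python
import hashlib
import pickle

_cache = {}

def cached_step(fn, *inputs):
    # Key the result by a hash of the step's code and its inputs:
    # an unchanged step over unchanged data is never re-executed.
    key = hashlib.sha256(fn.__code__.co_code + pickle.dumps(inputs)).hexdigest()
    if key not in _cache:
        _cache[key] = fn(*inputs)
    return _cache[key]

calls = []
def double(x):
    calls.append(x)  # track how many times the step actually runs
    return x * 2

cached_step(double, 21)  # executes the step
cached_step(double, 21)  # served from the cache
print(len(calls))        # → 1: the step ran only once
```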

Transparent upgrade: State-of-the-art is an ever-moving target as far as software is concerned. In 2010, the best machine learning toolkit was (arguably) SciPy. In 2013, it was scikit-learn. In 2016, it was TensorFlow. In 2017, it was Keras. In 2019, it was PyTorch. There is a saying in software engineering that you can date the year of birth of any given software project by looking at its software stack and its dependencies. Indeed, as soon as you roll out your own Python scripts, you onboard multiple dependencies that may not age well. In contrast, with Envision, we are extensively leveraging automated code rewrites10 to keep the “legacy” scripts up-to-date with an ever-changing language.

Packaged stack: Python scripts can’t live in a vacuum11. The code needs to be versioned (e.g. Git) with access rights (e.g. GitHub). The scripts need an environment to run in, which is not your machine (e.g. a Linux VM in the cloud). A scheduler is needed to orchestrate the data pipeline (e.g. Airflow). A distributed columnar storage layer is needed for data preparation (e.g. Spark). A machine learning toolkit is needed for predictive analytics (e.g. TensorFlow). An optimization toolkit is needed to deal with supply chain combinatorial problems (e.g. GLPK). The raw results need to be exposed somewhere for later consumption (e.g. an SFTP server). Fellow supply chain practitioners need to be able to monitor what’s going on (e.g. a web user interface). Access rights must be enforced (e.g. Active Directory), etc. Envision streamlines all of this into a single meta-app, removing the burden of assembling dozens of software pieces to deliver even the most basic app.

Finally, while Python is an excellent language, it is not beyond reproach:

  • Compute performance is bad, and it’s an uphill battle to pipe every single calculation through the right library (e.g. NumPy) to avoid abysmal performance at data-crunching tasks. Furthermore, using multiple libraries tends to create a lot of friction when data has to be moved from one to the next.
  • Memory performance is bad as well; in particular, the reference-counting garbage collection of Python is dated - more recent programming languages like Java, C# or JavaScript use tracing instead. When dealing with memory-intensive tasks over big data, this hurts.
  • The package management in Python has been a mess for a long time, and it takes a package specialist to get it right. This problem has been compounded by high-friction language upgrades.
  • Most of the (few) correctness checks only happen at runtime, when the program is executed, which is a source of endless frustration where data crunching is concerned. Obvious problems only manifest themselves after multiple minutes of execution, lowering productivity.
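The last point deserves a concrete illustration. In the sketch below (simulating a long data-crunching job with made-up records), the malformed record sails through the heavy computation and only blows up at the very end:

```python
def heavy_crunch(rows):
    # Imagine minutes of number crunching here; nothing is checked
    # before runtime, so the malformed record is accepted silently.
    return [r * 2 for r in rows]

rows = [1, 2, 3, "4"]         # one corrupted record slips in
doubled = heavy_crunch(rows)  # no error yet: "4" * 2 yields "44"

try:
    total = sum(doubled)      # the failure only surfaces here, at the end
except TypeError as exc:
    print("late failure:", exc)
```

A statically-checked language would have rejected the mixed-type list before a single row was processed.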

In conclusion, while Python is awesome (it is), it’s not a satisfying answer for supply chain optimization. Building and maintaining a production-grade machine learning app in Python is very much possible, but costs are significant, and unless your company is prepared to have at least a small software engineering team dedicated to the maintenance of this app, the whole thing is not going to deliver satisfying results for your supply chain.

Developing Envision, a domain-specific programming language dedicated to predictive supply chain optimization, wasn’t our first choice. It wasn’t even our 10th choice. It was more like the only working solution we had left after exhausting a long list of more conventional alternatives over five years. Seven years and many client companies later, each new client still manages to surprise us, one way or another, with yet another twist in their supply chain, which we would never have managed to embrace with a classic enterpriseware approach. Programmability was needed, but Python wasn’t the solution we needed.


  1. Back in 2013, I was still under the impression that it was possible to deliver a satisfying app for supply chain optimization. It was actually the confrontation with pricing challenges which somehow forced our hand, and steered Lokad down the path of building its own domain-specific language. This language was only intended for pricing optimization at first, but quickly we realized that this approach was also exactly what was needed for supply chain optimization. ↩︎

  2. The idea that delivering superior forecasting accuracy would, by itself, lead to superior supply chain performance was probably one of the biggest misconceptions I had while founding Lokad. See this Lokad TV episode for a saner perspective on this matter. ↩︎

  3. Most of the issues we identified with Python had nothing to do with Python per se, which is a great programming language, but merely the fact that Python is a generic programming language. ↩︎

  4. Back in 2013, Python had not yet reached the dominance it gained in the machine learning field over the few years that followed; R was still a strong contender. However, SciPy and NumPy, two excellent libraries, were already around and thriving at the time. ↩︎

  5. Check out the excellent talk Stop Writing Classes from PyCon 2012. Even seasoned software engineers tend to get it wrong. ↩︎

  6. Software engineering as opposed to computer science. The former is all about keeping production systems running, while the latter is about cracking hard problems, such as uncovering faster algorithms. ↩︎

  7. Unfortunately, in matters of code security, there is little or no substitute for systematic peer reviews of the code. ↩︎

  8. Due to the halting problem, the attentive reader might infer that Envision is not a Turing complete language. Indeed, Envision is not. ↩︎

  9. Envision relies on diffs between compute graphs, and tries to minimize the amount of recomputation between incremental changes in order to allow very fast prototyping over large datasets. However, depending on the situation, changing a single line may require recomputing the entire script. ↩︎

  10. Automated code rewrites are exceedingly difficult for a generic programming language. Unless the programming language and its whole standard library have been precisely engineered with this requirement in mind, automated upgrade tools do little in practice. When designing Envision, we knew that we were going to get tons of things wrong (and we did), and thus we paid a lot of attention to make sure that the language was particularly suitable for automated rewrites. To date, we have operated over 100 incremental rewrites since the inception of Envision. ↩︎

  11. Ironically, “Batteries Included” is one of Python’s mottos. However, the sheer amount of glue that is needed to bring together all the elements required to build an app intended for predictive supply chain optimization is daunting. ↩︎