### Joannes Vermorel

The first principle of our Quantitative Supply Chain manifesto states that *all futures should be considered*. Thus, we expanded Envision two years ago to natively work with random variables. This probabilistic algebra is the cornerstone of our way of dealing with uncertain futures.
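To give a feel for what an algebra over random variables looks like, here is a minimal Python sketch of one such operation: adding two independent discrete random variables by convolution. The dict-based representation and the `add_ranvars` name are illustrative assumptions, not Envision's actual ranvar implementation.

```python
# Hedged sketch: one operation of a "probabilistic algebra", namely adding two
# independent discrete random variables (e.g. demand over two weeks).
# Dicts map outcome -> probability; this mimics, not reproduces, Envision's ranvars.

def add_ranvars(a, b):
    """Distribution of X + Y for independent X ~ a and Y ~ b."""
    out = {}
    for x, px in a.items():
        for y, py in b.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

week1 = {0: 0.5, 1: 0.5}
week2 = {0: 0.5, 1: 0.5}
total = add_ranvars(week1, week2)  # {0: 0.25, 1: 0.5, 2: 0.25}
```

Every ranvar operation follows the same pattern: the inputs and the output are whole distributions, never single point forecasts.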

Then, the *second* principle states that *all feasible decisions should be considered*, e.g. quantities to be purchased from suppliers. Yet, those decisions are not *random variables*: the quantities associated with those decisions are *undecided*, not *uncertain*. Our probabilistic algebra was not sufficient by itself to properly reflect those yet-to-be-made decisions.

Thus, last year, we silently and gradually rolled out a complementary algebra: the algebra of zedfuncs. A zedfunc is a datatype in Envision intended to reflect economic rewards or losses associated with quantified decisions. The main trick is that a zedfunc does not compute the outcome for *one* decision, but for **all decisions**; e.g. all the rewards from triggering a production for 1 unit up to an infinity ^{1} of units.
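Conceptually, a zedfunc can be pictured as a single object that answers "what is the economic outcome?" for every candidate quantity at once. The Python sketch below illustrates the idea with a hypothetical production-reward example; the function names and numbers are made up for illustration and are not Lokad's implementation.

```python
# Illustrative sketch (hypothetical names, not Lokad's Envision implementation):
# a zedfunc pictured as one object covering the outcome of *every* candidate
# decision, rather than the outcome of a single decision.

def production_zedfunc(unit_margin, fixed_cost):
    """Reward of triggering a production of q units: zero if nothing is
    produced, otherwise the margin on q units minus the fixed setup cost."""
    def zf(q):
        return 0.0 if q == 0 else unit_margin * q - fixed_cost
    return zf

zf = production_zedfunc(unit_margin=5.0, fixed_cost=20.0)
rewards = [zf(q) for q in range(6)]  # outcomes for producing 0, 1, ..., 5 units
```

One zedfunc thus encodes the whole reward curve; queries such as "what is the smallest profitable batch?" become lookups on that curve rather than per-decision recomputations.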

By combining ranvars and zedfuncs, it is possible to cope with vicious supply chain complications such as price breaks with minimal effort. The zedfuncs are an essential ingredient of Quantitative Supply Chain optimization in order to produce prioritized lists of decisions, where all the feasible decisions are ordered by decreasing return on investment.
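The sketch below shows how such a prioritized list can emerge from the combination: a demand distribution (a ranvar) turns into an expected marginal reward for each successive unit (a zedfunc-like curve), and units from all SKUs are then ranked by decreasing reward. The dict representation, the margin/cost figures, and the two SKUs are illustrative assumptions, not Lokad's actual API or data.

```python
# Hedged sketch: combining a discrete demand distribution ("ranvar") with
# per-unit economics to rank purchase decisions by decreasing return.
# All names and numbers are illustrative, not Lokad's actual implementation.

def tail_prob(dist, q):
    """P(demand >= q) for a distribution given as {quantity: probability}."""
    return sum(p for d, p in dist.items() if d >= q)

def marginal_rewards(dist, margin, cost, max_units):
    """Expected reward of the q-th unit: margin * P(demand >= q) - cost."""
    return [(q, margin * tail_prob(dist, q) - cost) for q in range(1, max_units + 1)]

# Two SKUs with different demand distributions, same unit economics.
sku_a = marginal_rewards({0: 0.2, 1: 0.5, 2: 0.3}, margin=10.0, cost=2.0, max_units=2)
sku_b = marginal_rewards({0: 0.5, 1: 0.5}, margin=10.0, cost=2.0, max_units=2)

# Prioritized list: every feasible unit, across SKUs, by decreasing reward.
decisions = sorted(
    [("A", q, r) for q, r in sku_a] + [("B", q, r) for q, r in sku_b],
    key=lambda t: -t[2],
)
```

Because each extra unit has a diminishing chance of being sold, the marginal rewards decrease with quantity, and cutting the ranked list wherever the reward drops below zero (or below a capital constraint) yields the purchase decisions directly.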

As a minor downside, zedfuncs are quite demanding in terms of computing resources. They’re the sort of advanced numeric datatype that simply doesn’t fit into your average spreadsheet. Fortunately for us, Lokad transparently distributes the workload over a fleet of machines obtained from our favorite cloud computing platform and, overall, raw processing power has never been cheaper. Thus, in practice, dealing with hundreds of millions of zedfuncs remains a non-challenge for the Envision back-end.

The present design of the Lokad zedfuncs took serious effort, and we went through a complete rewrite of our first attempt. The crux of the challenge was our own lossy compression algorithm for zedfuncs - a necessary component to keep the memory footprint of zedfuncs under control - which was not good enough. More precisely, our first version was not retaining enough numerical precision where it mattered the most. The second version got it right, thanks to the insights we had gathered from our production systems.
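To make the compression trade-off concrete, here is a toy scheme: sample the reward curve densely at small quantities, where most real-world decisions live, and geometrically coarser in the tail, then rebuild the curve by interpolation. This is purely an illustration of the kind of trade-off involved, not Lokad's actual algorithm.

```python
# Toy illustration (not Lokad's actual algorithm) of lossy compression for a
# reward curve: fine resolution at small quantities, coarse resolution in the
# tail, so that precision is spent where decisions actually happen.

def compress(values, ratio=1.5):
    """Keep roughly geometrically spaced knots (index, value) of the curve."""
    knots, i = [], 0
    while i < len(values):
        knots.append((i, values[i]))
        i = max(i + 1, int(i * ratio))
    if knots[-1][0] != len(values) - 1:
        knots.append((len(values) - 1, values[-1]))
    return knots

def decompress(knots, length):
    """Rebuild a dense curve by linear interpolation between knots."""
    out = []
    for (i0, v0), (i1, v1) in zip(knots, knots[1:]):
        for i in range(i0, i1):
            out.append(v0 + (i - i0) / (i1 - i0) * (v1 - v0))
    out.append(knots[-1][1])
    return out[:length]

curve = [float(q) for q in range(20)]    # a (trivially) linear reward curve
knots = compress(curve)                  # far fewer knots than points
rebuilt = decompress(knots, len(curve))  # close to the original
```

The delicate part, as we learned the hard way, is choosing *where* the scheme is allowed to lose precision: a grid that looks fine on paper can still blur the curve exactly where the profitable decisions sit.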

As probabilistic demand forecasting gains traction among supply chains, many solutions are still only scratching the surface when it comes to making the most of these newer perspectives. Proper tools are essential. Generating billions of probabilities is (somewhat) easy; turning those into the decisions that maximize your ROI is a lot more challenging. That’s what zedfuncs are for.

Zedfuncs are the sort of invaluable tool that probably won’t ever make it into any of the many RFPs we receive; and yet, most of the *Quantitative Supply Chain* can’t be achieved without those zedfuncs - or their better (future) alternatives.

- As the Lokad team hasn’t discovered - yet - a way to get quantum mechanics to perform an infinite amount of calculations in a finite amount of time, we are using *numerical tricks* to restrict the calculations to where the results actually matter from an economic perspective. ^{[return]}