Forecasting 4.0 with Probabilistic Forecasts

Published by Joannes Vermorel.

A little over one year ago, we unveiled quantile grids as our 3.0 forecasting technology. More than ever, Lokad remains committed to delivering the best forecasts that technology can produce, and today, our 4th generation of forecasting technology, namely our probabilistic forecasting engine, is live and available in production for all clients. This new engine consists of a complete rewrite of our forecasting technology stack, and addresses many long-standing challenges that we were facing.

True probabilities

The future is uncertain no matter how good the forecasting technology. Back in 2012, when Lokad first ventured into the depths of quantile forecasting, we quickly realized that uncertainty should not be dismissed, as the classic forecasting approach does, but embraced. Simply put, supply chain costs are concentrated at the statistical extremes: it's the surprisingly high demand that causes stock-outs, and the surprisingly low demand that causes dead inventory. In the middle, the supply chain tends to operate quite smoothly.

With quantile grids, Lokad was delivering a much more fine-grained vision of possible future outcomes. However, as the name suggests, our quantile grids were built on top of our quantile forecasts, multiple layers of quantiles actually. These quantile grids proved to be tremendously useful over the last year, but while our forecasting engine was producing probabilities, internally, nearly all its logic was not working directly with probabilities. The probabilities we computed were a byproduct of a quantile forecasting system.

Because of these quantile roots, our forecasting engine 3.0 had multiple subtle limitations. And while most of these limitations were too subtle to be noticed by clients, they did not go unnoticed by Lokad’s R&D team. Thus, we decided to reboot our entire forecasting technology from a true native probabilistic forecasting perspective; this was the start of the forecasting engine 4.0.

Lead time forecasting

Lead times are frequently assumed to be a given. However, while past lead times are known, future lead times can only be estimated. For years, Lokad had underestimated the challenge of accurately estimating future lead times. Lead times are subtle: most of the statistical patterns that impact demand, such as seasonality (and the Chinese New Year in particular), also impact lead times.

In our forecasting engine 4.0, lead times have become first-class citizens with their own lead time forecasting mode. Lead times now benefit from dedicated built-in forecasting models. Naturally, since our engine is probabilistic, lead time forecasts are themselves distributions of probabilities over an uncertain time period.
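To make this concrete, here is a minimal sketch (in Python, with purely illustrative numbers, not the engine's actual output) of what a probabilistic lead time forecast looks like: one probability per possible lead time, expressed in days.

# A probabilistic lead time forecast: probability mass per lead time in days.
# The numbers are purely illustrative.
lead_time_pmf = {
    5: 0.1,   # 10% chance the goods arrive in 5 days
    6: 0.2,
    7: 0.4,   # most likely lead time
    8: 0.2,
    9: 0.1,
}

assert abs(sum(lead_time_pmf.values()) - 1.0) < 1e-9

mean_lead_time = sum(d * p for d, p in lead_time_pmf.items())
print(mean_lead_time)  # 7.0 days on average, but the spread matters even more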

Integrated demand forecasting

Lead times vary, and yet our forecasting engine 3.0 was stuck with fixed lead times. The classic safety stock analysis assumes that lead times follow a normal distribution, while nearly all the measurements we have ever carried out indicate that varying lead times are clearly not normally distributed. Our experiments routinely showed that a fixed lead time was better than a flawed model, but being stuck with static lead times was still not the satisfying solution we were looking for.

The forecasting engine 4.0 introduces the concept of integrated demand forecasting, with integrated meaning integrated over the lead time. The engine takes a full distribution of lead time probabilities and produces the corresponding probabilistic demand forecast. In practice, the lead time distribution is itself computed by the forecasting engine, as seen previously. Integrated demand forecasting finally brings a satisfying answer to the challenge of dealing with varying lead times.
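As a rough illustration of what integrating over the lead time means, here is a Monte Carlo sketch; it assumes independent daily demand for simplicity, which is an assumption made for this illustration, not a description of how the actual engine works.

import random
from collections import Counter

# Illustrative inputs: a lead time pmf (in days) and a daily demand pmf.
lead_time_pmf = {5: 0.1, 6: 0.2, 7: 0.4, 8: 0.2, 9: 0.1}
daily_demand_pmf = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.05}

def draw(pmf):
    # Draw one sample from a discrete probability distribution.
    r, acc = random.random(), 0.0
    for value, prob in pmf.items():
        acc += prob
        if r <= acc:
            return value
    return value  # guard against floating-point round-off

def integrated_demand(n_samples=100000):
    # Distribution of the total demand over the (uncertain) lead time:
    # draw a lead time, then draw that many days of demand, and tally.
    totals = Counter()
    for _ in range(n_samples):
        days = draw(lead_time_pmf)
        totals[sum(draw(daily_demand_pmf) for _ in range(days))] += 1
    return {k: v / n_samples for k, v in sorted(totals.items())}

print(integrated_demand())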

New products forecasting

Forecasting demand for new product is plain hard. Since, in this case, forecasting obviously cannot rely on the sales history, the forecasting engine has to rely on other data known about the product prior to its launch. Our forecasting engine 3.0 already had a tags framework, precisely geared towards this specific use case. However, tags were unfortunately not carrying as much information as we would have liked and some accuracy was left on the table.

With 4.0, this specific challenge is revised with the introduction of categories and hierarchies. Categories and hierarchies are more expressive as well as more structured than tags, and convey a lot more information. The forecasting engine 4.0 takes the full advantage of this richer data framework to deliver more accurate forecasts, with new-product forecasting being the most acute use case.

Stock-outs and promotions

The intent of the forecasting engine is to forecast the future demand. However, our knowledge of past demand is typically imperfect, with only past sales really being known. Sales typically tends to be a reasonable approximation of the demand, but sales come with multiple biases, the most common cases being stock-outs and promotions. Our engine 3.0 already had a few heuristics to deal with this bias, plus quantile forecasts are intrinsically more robust than (classic) average forecasts. Yet, once again, the situation was not entirely satisfying for us.

The engine 4.0 introduces the notion of biased demand, which can be either censored or inflated. When the demand for a given product on a given day is marked as censored, we are telling the forecasting engine that the demand should have been higher, and that the true demand for that day remains unknown. The engine leverages this information to refine the forecasts, even when the history is full of events which have distorted the demand signal.
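To illustrate the principle with a deliberately simplistic sketch - assuming Poisson demand, an assumption made here for illustration only - a censored observation can enter the likelihood as a lower bound rather than as a point value:

import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def log_likelihood(observations, lam):
    # Each observation is (sales, censored). A censored day means a
    # stock-out occurred: the true demand was *at least* `sales`, so the
    # tail probability P(D >= sales) is used instead of P(D == sales).
    ll = 0.0
    for sales, censored in observations:
        if censored:
            tail = 1.0 - sum(poisson_pmf(k, lam) for k in range(sales))
            ll += math.log(max(tail, 1e-12))
        else:
            ll += math.log(max(poisson_pmf(sales, lam), 1e-12))
    return ll

# The third day hit a stock-out after 4 units sold: demand was at least 4.
history = [(2, False), (3, False), (4, True), (1, False)]
best = max((lam / 10.0 for lam in range(1, 101)),
           key=lambda lam: log_likelihood(history, lam))
print(best)  # lands above the naive average of 2.5 units per day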

Ultra-sparse demand

While quantile forecasts are vastly superior to classic average or median forecasts when it comes to estimating the probabilities of rare events, quantiles begin to show their limits when it comes to estimating super-rare events. For example, our quantile models were struggling to properly deal with items sold only once or twice a year, as well as with service levels higher than 98%.

Native probabilistic models, as implemented in our engine 4.0, are much better behaved when it comes to ultra-sparse demand and “rare” events in general. These models could have been implemented within a quantile forecasting framework (a probabilistic forecast can easily be turned into a quantile forecast), but our engine 3.0 did not have the infrastructure to support them. So they were implemented in the engine 4.0 instead.
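The conversion mentioned above is indeed straightforward; a minimal sketch, with an illustrative demand distribution:

def quantile_from_pmf(pmf, tau):
    # Smallest value whose cumulative probability reaches the target tau.
    acc = 0.0
    for value in sorted(pmf):
        acc += pmf[value]
        if acc >= tau:
            return value
    return max(pmf)  # guard against floating-point round-off

demand_pmf = {0: 0.55, 1: 0.25, 2: 0.12, 3: 0.05, 4: 0.02, 5: 0.01}
print(quantile_from_pmf(demand_pmf, 0.98))  # 98% service level -> 4 units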

Blended into Envision

Versions 2.0 and 3.0 of our forecasting engine came with a web user interface. At first glance, this seemed easy to use. However, the user interface was dismissing the factor which represents the true challenge of using any forecasting engine: keeping complete control over the data transferred into the engine. Indeed, garbage in, garbage out remains an all too frequent problem.

The engine 4.0 is interfaced from within Envision, our domain-specific language geared towards quantitative optimization for commerce. Calling the forecasting engine takes a series of data arguments provided from an Envision script. This approach requires a bit more upfront effort; however, the productivity benefits kick in rapidly, as soon as adjustments need to be made to the input data.

The release of our forecasting engine 4.0 is only the first part of a series of important improvements that have been brought to Lokad over the last few weeks. Stay tuned for more.


Autocomplete file paths with Envision

Published by Joannes Vermorel.

When data scientists work with Envision, our domain-specific language tailored for quantitative optimization for commerce, we want to ensure that they are as productive as possible. Indeed, data scientists don't grow on trees, and when you happen to have one available, you want to make the most of their time.

A data analysis begins by loading input data, which happens to be stored as flat files within Lokad. Therefore, an Envision script always starts with a few statements such as:

read "/sample/Lokad_Items.tsv"
read "/sample/Lokad_Orders.tsv" as Orders
read "/sample/Lokad_PurchaseOrders.tsv" as PurchaseOrders

While the Envision syntax is compact and straightforward, file names may, on the other hand, be fairly complex. From the beginning, our source code editor shipped with autocompletion; however, until recently, autocompletion was not providing suggestions for file names. A few days ago, the code editor was upgraded, and file names are now suggested as you type.

This feature was part of a larger upgrade which also made the Envision code source editor more responsive and more suitable for dealing with large scripts.


Proofs of concept don’t work in quantitative supply chain optimization

Published by Joannes Vermorel.

Proofs of concept are one of the most frequent requests we get from prospect clients looking to try out our supply chain optimization service. Yet, we frequently decline such requests: first because they hurt the client’s company itself, and second because they also hurt Lokad in the process. Since POCs – or proofs-of-concept – are so widespread in B2B software, it is usually hard to grasp why they can be downright harmful in the specific case of quantitative supply chain optimization (1). In this post, we gather our findings on POCs, considering them to be a supply chain “anti-pattern”.

POCs do not cost less

One core assumption behind the POC methodology is that POCs cost less than the real thing. Unfortunately, this assumption is nearly always incorrect.

First, establishing a small scope within an entire supply chain network barely moves the needle. In the past, software vendors struggled with scalability problems, and actual full-scale deployments typically required heavy upfront hardware investments, potentially bundled with software licenses such as databases. Without these investments, it was not even possible to start processing data. Yet, in today’s age of cloud computing, this constraint no longer exists, and if an app is designed correctly, nothing extra is required to start processing data. The cloud computing bill will increase only marginally for every additional client, and all in all, this cost is negligible compared to, say, the costs involved in establishing a discussion with the prospect. Second, the bulk of the initial effort consists of qualifying the data, followed by a proper identification of the business drivers; and this effort is rooted in establishing a commercial B2B relationship with the client.

Worse still, having more data typically makes things easier, not harder, whenever statistical forecasting is involved. Therefore, by restricting the data scope, POCs tend to make things more difficult, and hence more costly, compared to addressing the full scope of the challenge. Our experience indicates that even when POCs focus on only 5% of the entire supply chain network, these 5% typically involve almost the entire complexity of the network as a whole. Actually, it is precisely because POCs embed nearly all the complexity of a full-scale project that they would be expected to make sense in the first place.

Dismissing the complexity is indeed not an option. If your supply network involves container shipments and unreliable suppliers, how could a POC possibly be convincing if these elements are not factored into the initiative? If any specific constraint is ignored, such as MOQs (minimal order quantities), the numerical results end up being unusable.

The costs of the POC are driven by the efforts to be invested on both sides, by Lokad and by its client, in managing the full complexity of the supply chain. Those costs are driven by the specificities of the business being considered, with scale having only a marginal impact on costs.

POCs increase the odds of failure

When opting for a POC, companies frequently end up merely trying stuff to improve their supply chain. However, in this specific case, I would like to quote Yoda: “Do. Or do not. There is no try.” Despite the claims of software vendors, optimizing a supply chain is hard. The problem with POCs is that they give too much leeway for all parties to fail.

  • Extracting sales history is hellishly complicated. Alas, there is no alternative anyway: one will never succeed at optimizing supply chain without data representing the demand.
  • Electronic stock levels are inaccurate. Technology can help auto-detect the most obvious deviations, and help prioritize recounts. However, it is not uncommon for supply chain managers to have to deal with phantom inventory too.
  • Forecasts remain poor no matter what. Businesses should learn to embrace an uncertain future, instead of wishing this uncertainty away. Probabilistic forecasts are particularly good at capturing future uncertainty.

Each of these complications is one more excuse to drop the ball.

There are situations where solutions are expected to be easy and uneventful: creating a new email account for a new employee for example. However, optimizing supply chain is nearly always difficult: if the company has been around for more than a few years, the “easy” part of supply chain optimization has already been done years ago. The “difficult” part is what remains.

In our experience, most POCs fail at the initial stages of the project, when teams are still struggling with data issues. Yet, this says nothing about the inventory optimization solution itself, because the solution is never put to the test.

POCs sidetrack supply chain optimization initiatives

POCs emphasize a viewpoint that is not exactly the production viewpoint. Executives seek benchmarks to be run or KPIs to be established. However, what if a certain KPI happens to be more difficult to compute than performing the optimization itself? What if the KPI, while instructive, does not offer any tractable option to improve anything?

Our experience indicates that POCs routinely get sidetracked by considerations that are simply non-requirements from a production perspective. Trying to address those requirements typically poisons the POC, because suddenly the POC becomes an even greater challenge than production itself.

Also, as the main point of a POC is to seek reassurance, most POCs suffer from gold plating anti-patterns, where the client company pressures the vendor into capturing every single aspect of their business, even at the expense of the overall reliability of the solution. The resulting solution is often too brittle to be of any use from a production perspective.

We have seen many POCs fail on “imaginary” problems as well. For example, if the best forecasting model, empirically tested over thousands of SKUs, happens to be non-seasonal and outperforms all the available seasonal models, should this be considered a problem? There is no question whether the business in question is seasonal: it is. But what if the best known way to anticipate future demand is to merely ignore seasonality in this case? In our experience, this single “problem” has been treated as a blocking issue in many POCs, even while the supply chain practitioners themselves admitted that the suggested purchase order quantities were sound.
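To make this tangible, here is a toy backtest - made-up data and naive models, not Lokad's actual benchmarking - where the winner between a seasonal and a non-seasonal model is decided by measured error rather than by prior beliefs about the business:

import random

def backtest(history, horizon=12):
    # Compare a non-seasonal model (global average) against a seasonal one
    # (per-month averages) on held-out data; keep the lower-error model.
    train, test = history[:-horizon], history[-horizon:]
    overall = sum(train) / len(train)
    monthly = [sum(train[m::12]) / max(len(train[m::12]), 1) for m in range(12)]
    flat_err = sum(abs(x - overall) for x in test)
    seas_err = sum(abs(x - monthly[(len(train) + i) % 12])
                   for i, x in enumerate(test))
    return "seasonal" if seas_err < flat_err else "non-seasonal"

random.seed(0)
# Ultra-sparse noisy monthly sales: whether the seasonal model wins here is
# an empirical question, and "non-seasonal" is a perfectly legitimate answer.
sparse_history = [random.choice([0, 0, 0, 1, 2]) for _ in range(48)]
print(backtest(sparse_history))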

Go for production and revise the project if needed

POCs are usually and rightfully perceived as distractions by practitioners who need to keep the business running while the next-gen solution is coming. Our experience indicates that going straight for production is cheaper and less risky. However, this should be done with the proper methodology.

First, failing on the “logistics of data” is not an option. You can’t optimize what you don’t measure. If the data is meaningless, then all optimization attempts will be meaningless too. Success here is a requirement, since otherwise the company may no longer exist a few years from now. It turns out that the vast majority of the effort to be invested goes into this logistics of data, and this investment can be almost fully decoupled from the solution being considered for production. And this is a good thing! If the optimization solution falls short for some reason, the investment is not lost; it merely needs to be redirected to a better alternative solution.

Second, while the goal is to shoot straight for production, this does not mean that the numbers go unchallenged, quite the opposite. The old and the new process should coexist, picking as many low-hanging fruits as possible from the old process (2) while the new one gets polished.

Then, dozens of issues typically arise. It is important to sort them out:

  • problems that were already impacting the old process, albeit more silently; good processes and good technologies make problems obvious, which is not a defect but a virtue.
  • problems that can’t be fixed by the software being deployed; if the SKU picking is unreliable in the warehouse, don’t expect the demand forecasting module to make it trustworthy.
  • mismatches between real problems and expectations; statistical forecasting is deeply counter-intuitive, so don’t let your expectations override what the quantitative measurements tell you.
  • design issues that can’t be solved without significantly redesigning the solution, which usually happens when the software does not have the right angle to tackle the challenge.

The last point requires another solution to be considered. However, as mentioned above, this should not be the end of the initiative, merely the beginning of a collaboration with another vendor.

Abandoning a POC midway usually means losing the entire momentum that has been invested in the initiative. Furthermore, most POCs fail for the wrong reasons, which means that the odds of success of future attempts barely improve, as the real challenges remain mostly untouched.

Going straight for production is actually less risky than it sounds. It helps prevent an entire class of failures that tend to be ignored in the case of POCs, while they should not be. It forces the initiative to adopt a narrow focus on what is actually needed to obtain improvements, and to put wishful thinking aside. When facing a serious vendor failure, a company can still capitalize on its internal momentum and switch to another vendor, without losing that momentum, as usually happens with POCs.

(1) There are many ways to optimize supply chain: better processes, better suppliers, better transporters, better hiring … This post focuses on quantitative optimization: supply chain challenges that can be addressed through statistical forecasts and/or numeric solvers.

(2) Fixing phantom inventory is of benefit to all inventory optimization processes. The same is true for revisiting and improving inventory valuations.


Q&A about inventory optimization software

Published by Joannes Vermorel.

Under the supervision of Prof. Dr. Stefan Minner, Leander Zimmermann and Patrick Menzel are writing a thesis at the Technical University of Munich. The goal of their study is to compare inventory optimization software. Lokad received their questionnaire, and with the permission of the authors, we are publishing here both their questions and our answers.

1. When did you introduce your optimization software to the market?

Lokad was launched in 2008, but as a pure demand forecasting solution at the time. We started to do end-to-end supply chain optimization in 2012.

2. For which company sizes is your software suitable?

We have clients ranging from one-man companies to companies with over 100,000 employees. However, below 500k€ worth of inventory, the statistical optimization of the supply chain is frequently not worth the effort.

3. For a midsized company of around 50-250 employees and sales of around 10-25 million euros per year, what would be the price of your standard software package?

This would be our Premier package at $2500 / month. However, the package covers a lot more than just software; pure software accounts for only about 1/5th of our fees.

The bulk of the fee goes into paying a data scientist at Lokad who manages the account, leveraging our technology stack to get the final results. That's what we call inventory optimization as a service.

4. Is your software suitable for different industries? (e.g. pharmacy, metal, perishable goods, …)

Yes, we support diverse verticals, from aerospace to fashion, with fresh food in between. However, our software is primarily a programmatic toolkit tailored for quantitative supply chain optimization. While we do address many verticals, it usually takes a data scientist to craft the finalized solution.

5. What characteristics of your software differentiate you from other optimization software? (Unique selling proposition)

Classic forecasts, and by extension the classic inventory optimization theory, work poorly - surprisingly poorly, even. It took Lokad years to realize that the main challenge, statistically speaking, lies in the extreme cases, because the extremes are what actually costs money. Lokad delivers probabilistic forecasts. Whenever inventory is involved, probabilistic forecasts are just better than classic ones.

6. For which computer platforms is your software applicable? (e.g. Microsoft, Apple, Linux, …)

Lokad is a SaaS (web app) built on top of a cloud computing platform (Microsoft Azure). Our clients are very diverse; in supply chain, however, there are still more IBM mainframes out there than OSX setups.

Without a cloud computing platform, it would be very impractical to run the machine learning algorithms that Lokad routinely leverages. Thus, our software is not designed to run on-premise.

7. Does your company provide standardized or personalized software solutions?

Tricky question and subtle answer.

Lokad delivers a packaged platform. We are multi-tenant: all our clients run on the same app. In this respect, we are heavily standardized.

Yet, Lokad delivers a domain-specific language called Envision. Through this language, it's possible to tailor bespoke solutions. In practice, most of our clients benefit from fully personalized solutions.

Lokad has crafted a technology intended to deliver personalized supply chain solutions at a fraction of the costs usually involved with such solutions by boosting the expert's productivity.

8. If it is a standardized software, which features are included in the standard package of your software?

We have over 100 pages' worth of documentation. For the sake of concision, it won't be listed here.

9. Are there add-ons available? If yes, which? (e.g. spare parts, …)

We don’t have add-ons, in the sense that every single plan – even our free plan – includes all features without restriction.

10. For which stages/levels can your software optimize inventory management? (e.g. factory, warehouse, supplier, …)

We cover pretty much all supply chain stages - warehouses, points of sale, workshops - both for forward and reverse logistics.

11. Is your software solving the problems optimally or heuristically?

Computer science tells us that nearly every non-trivial numerical optimization problem can only be solved approximately. Even something as basic as bin packing is already NP-complete, and bin packing is far from being a complex supply chain problem.
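Bin packing makes the point concrete. The snippet below is the classic first-fit-decreasing heuristic - a textbook algorithm, not Lokad's solver: it packs items in a fraction of a second, yet offers no optimality guarantee.

def first_fit_decreasing(sizes, capacity):
    # Classic bin-packing heuristic: sort the items in decreasing order,
    # then put each item into the first bin where it fits.
    bins = []
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# [[8, 2], [4, 4, 1, 1]] -- a good answer, but a heuristic one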

Many vendors - maybe even Lokad (I try hard to resist marketing superlatives) - may claim to have an "optimal" solution, but, at best, this should be considered dolus bonus, an acceptable lie, akin to TV ads boasting an unforgettable experience or similar semi-ridiculous claims.

I advise checking my earlier post about the top 10 lies of forecasting vendors. Any vendor who seriously claims to deliver an "optimal" solution - in the mathematical sense - is either lying or delusional.

12. Which algorithms is your software using? (e.g. Silver-Meal, Wagner-Whitin, ...)

Both Silver-Meal and Wagner-Whitin come from the classic perspective, where future demand cannot be expressed as arbitrary non-parametric distributions of probabilities. In our book, those algorithms fail to deliver satisfying answers whenever uncertainty is present.

Lokad uses over 100 distinct algorithms, most of them having no established name in the scientific literature. Specialization is king. Most of those algorithms are only new/better in the sense that they provide a superior solution to a very narrow class of problems - as opposed to generic numeric solvers.

13. Where are the limits in terms of input quantities which can be calculated at once? (e.g. size of cargo, different products, period of time, …)

The numerical limits of our technology are typically ridiculously high compared to the actual size of supply chain challenges. For example, no more than 2^32 SKUs can be processed at once. Through cloud computing, we can tap nearly unbounded computing resources.

That being said, unbounded computing resources also imply unbounded computing costs. Thus, while we don’t have hard limits on data inputs or outputs, we pay attention to keeping those computing costs under control, adjusting the amount of computing resources to the scale of the business challenge being addressed.

14. How many variables can be chosen and how many are given? (e.g. degree of service, period of time, Lot size, ...)

Lokad is designed around “Envision”, a domain-specific programming language dedicated to supply chain optimization. This language offers programmatic capabilities; hence, once again, the hard limits are so high that they are irrelevant in practice. Our language would not support more than 2^31 variables, for example.

However, dealing with more than 100 heterogeneous variables at once would already be an insanely costly undertaking from a practical perspective: each variable needs to be qualified, fed with proper data, properly adjusted to fit into the bigger model, etc.

15. Does your inventory management support multiple supply chains for one stock?

Yes. There might be multiple sources AND multiple consumers for a given stock. Inventory can be serial too: each unit of stock may have some unique properties influencing the rest of the chain. This situation is commonly found in aerospace for example.

16. If yes, can those supply chains be prioritized/classified? (e.g. ABC/XYZ products)

Yes. However, prioritization is usually more expressive than classification. We strongly discourage our clients from using ABC analysis, because a lot of valuable information gets lost through such a crude classification.

17. Which method of demand forecasting is implemented? (e.g. moving average, exponential smoothing, Winter’s Method, …)

Moving average, exponential smoothing, Holt's and/or Winter's methods: all of these produce classic forecasts - that is, average or median forecasts. Those forecasts invariably work poorly for inventory optimization because they can’t capture a truly stochastic vision of the future. Plus, as a separate concern, they can’t correlate demand patterns between SKUs either.

As the counterpart of constrained optimization (detailed above), Lokad also has over 100 algorithms in the field of statistical forecasting. Most of those algorithms have no well-known name in the literature either. Yet again, specialization is king.

18. How many past periods are considered to calculate the future demand?

The idea that past demand should be represented as periods is mostly wrong. The granularity of the demand matters: 10 clients ordering 1 unit each is not the same thing as 1 client ordering 10 units at once. Our algorithms are typically not based on periods.

Then, in terms of depth of history, our algorithms typically try to leverage all the history available. In practice, it’s rare that looking further than 10 years back yields any gain in the future forecasts. So there is no hard limit; it’s just that the past fades into numerical irrelevance.

19. Is the seasonal change in demand included in the forecast? (yes/no)

Yes. However, seasonality is only one of the cyclicities that exist in demand: the day of the week and the day of the month are also important, and are also handled. Then, we have also made recent progress on quasi-seasonality: patterns that don’t exactly fit the Gregorian calendar, such as Easter, the Chinese New Year, Ramadan or Mother’s Day.

20. What kind of performance measures can be analyzed? (e.g. waiting time, ready rate, non-stockout probability, degree of service, …)

As long as you can write a program to express your metric, it should be feasible with Lokad. Yet again, Lokad offers a domain-specific programming language, so we are flexible by design. In the end, there is one metric to rule them all: the dollars of error.

21. Does your software support the implementation of penalty costs? (e.g. cost for “out of stock”, “capacity limits reached”, …)

Yes, it's one special case of the many business drivers that we take into account. Those penalties can take many numerical shapes: linear or not, deterministic or not, etc.

22. Which are your three strongest competitors in your market segment?

Excel, Excel and Excel. Number 4 is pen+paper+guesswork.

23. Do you have a list of companies (mid-size to large-size) using your software?

See our customer's page.


Uploading very large files through the web

Published by Joannes Vermorel.

The web was not really intended for transferring giganormous files. For that purpose, there are other (older) protocols like FTP (the File Transfer Protocol) or its secure alternatives FTPS and SFTP. Lokad was already supporting many file-receiving options, including web uploads. However, until today, our web uploads were restricted to files weighing less than 200MB.

Today, we have released a new version of our web upload feature, and it is now possible to upload arbitrarily large files into Lokad through your favorite web browser. Our web upload component is smart enough to perform retries, so if your internet connection faces glitches midway, the upload will not restart from scratch, but resume the transfer instead.
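For the curious, resumable uploads generally follow the pattern sketched below; this is a generic illustration built around a hypothetical send_chunk transport callback, not Lokad's actual protocol. The file is cut into chunks, and a failure only triggers a retry of the current chunk, never of the whole file.

import time

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk (an illustrative choice)

def upload_resumable(path, send_chunk, max_retries=5):
    # `send_chunk(offset, data)` is a hypothetical transport callback that
    # raises IOError on network failure.
    offset = 0
    with open(path, "rb") as f:
        while True:
            f.seek(offset)
            data = f.read(CHUNK_SIZE)
            if not data:
                return  # the whole file has been transferred
            for attempt in range(max_retries):
                try:
                    send_chunk(offset, data)
                    break
                except IOError:
                    time.sleep(2 ** attempt)  # exponential backoff
            else:
                raise IOError("chunk at offset %d failed repeatedly" % offset)
            offset += len(data)  # progress survives the glitches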

While uploading a 10GB flat file through your web browser might not be a very practical option when operating in production, it can be very handy to quickly get started with Lokad; especially if you're not too comfortable with FTP clients like FileZilla.

PS: we never pushed an official announcement, but we have also been supporting public key authentication for SFTP for a while.


Joining tables with Envision

Published by Joannes Vermorel.

When it comes to supply chain optimization, it’s important to accommodate the challenges while minimizing the amount of reality distortion that gets introduced in the process. The tools should embrace the challenge as it stands, instead of distorting the challenge to make it fit within the tools.

Two years ago, we introduced Envision, a domain-specific language precisely intended as a way to accommodate the incredibly diverse range of situations found in supply chain. From day 1, Envision offered a programmatic expressiveness which was a significant step forward compared to traditional supply chain tools. However, this flexibility was still limited by the viewpoint taken by Envision itself on the supply chain data.

A few months ago, we introduced a generic JOIN mechanism in Envision. Envision is no longer limited to the natural joins it started with, and now offers the possibility to process a much broader range of tabular data. In supply chain, arbitrary table joins are particularly useful to accommodate complex scenarios such as multi-sourcing, one-way compatibilities, multi-channel setups, etc.

For readers who may already be familiar with SQL, joining tables feels like a rather elementary operation; however, in SQL, combining complex numeric calculations with table joins rapidly ends up producing source code that looks obscure and verbose. Moreover, joining large tables also raises quite a few performance issues, which need to be carefully addressed either by adjusting the SQL queries themselves, or by adjusting the database itself through the introduction of table indexes.

One of the key design goals for Envision was to give up on some of the capabilities of SQL in exchange for a much lower coding overhead when facing supply chain optimization challenges. As a result, the initial Envision was solely based on natural joins, which removed almost entirely the coding overhead associated with JOIN operations as they are usually written in SQL.

Natural joins have their limits, however, and we lifted those limits by introducing the left-by syntax within Envision. Through left-by statements, it becomes possible to join arbitrary tables within Envision. Under the hood, Envision takes care of creating optimized indexes to keep the calculations fast, even when dealing with giganormous data files.
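For readers more at home with Python than Envision, the following pandas sketch - an analogy, not Envision syntax - conveys the gist of an arbitrary, non-natural join in a multi-sourcing scenario:

import pandas as pd

# Multi-sourcing: each item may have several eligible suppliers, so the
# join key is not a "natural" one-to-one identifier.
items = pd.DataFrame({"item": ["A", "B"], "demand": [120, 40]})
suppliers = pd.DataFrame({
    "item": ["A", "A", "B"],
    "supplier": ["S1", "S2", "S3"],
    "lead_time_days": [12, 30, 7],
})

# Left join: every item is kept, fanned out across its eligible suppliers.
options = items.merge(suppliers, on="item", how="left")
print(options)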

From a pure syntax perspective, left-by is a minor addition to the Envision language; from a supply chain perspective, however, this one feature significantly improved the capacity of Lokad to accommodate the most complex situations.

If you don’t have an in-house data scientist who happens to be a supply chain expert too, we do. Lokad provides an end-to-end service where we take care of implementing your supply chain solution.


Insights on the Lokad tech evolution

Published by Joannes Vermorel.

The technology of Lokad has evolved so much that people who have had the chance to trial Lokad even two years ago would barely recognize the app as it stands today.

The "old" Lokad was completely centered around our forecasting engine - i.e. what you can see as a forecasting project in your Lokad account today. As a result, our forecasting engine gradually gained tons of features not even remotely related to statistics. About two years ago, our forecasting engine had become a jack-of-all-trades responsible for almost everything:

  • data preparation with the possibility to accommodate a large diversity of data formats
  • reporting analytics with a somewhat complex, and somewhat flexible, Excel forecasting report
  • scheduled execution through a webcron integration or through the API

Then, during the last two years, we have gradually introduced stand-alone replacements for those features that now live outside our forecasting engine. However, calling those new features mere replacements is unfair, because those replacements are vastly more powerful than their original counterparts.

  • We can now process very diverse files, varying in size, in complexity and even in data formats. Plus, we have many data connectors too.
  • The capabilities of our old Excel forecasting report are dwarfed by the newer reporting capabilities of Envision.
  • Scheduling and orchestration are now first-class citizens, which also encompass the data retrieval from other apps.

Because those new features are plainly superior to the old ones, we are gradually phasing out the cruft, that is, phasing out all the non-forecasting related things that still live inside our forecasting engine.

In order to keep the process smooth, we are gradually - but actively - migrating all our clients from the old Lokad to the new Lokad; and when an old feature isn't used anymore, we remove it entirely.

The old Excel forecasting report is a tough case for us. The challenge is not to merely duplicate the report itself within Envision (that alone isn't hard at all) - the challenge is that the underlying thinking that went into this report is now fairly outdated. Indeed, over the years, Lokad has introduced better forecasting technologies - the latest iteration being probabilistic forecasts - which cannot be made to fit within this report. By design, this one report is stuck with a legacy approach to forecasting, which unfortunately is not such a good fit as far as inventory optimization is concerned.

In contrast, combining probabilistic forecasts with business drivers does require more effort, both on the Lokad side and on the client side, but the business results simply don't compare. The former approach optimizes percentages of error while the latter optimizes dollars of error. Unsurprisingly, once our clients realize how much money they leave on the table by not doing the latter, they never consider going back to the former.

Then, our data integrations are currently undergoing a similar, and no less radical, transformation. When we started developing data connectors, we tried to fit all the data we were retrieving into the framework established by our forecasting engine, that is, producing files such as Lokad_Items.tsv, Lokad_Orders.tsv, etc. This approach was initially appealing because it forced a normalization on the data retrieved and processed by Lokad.

Unfortunately, this abstraction - like all abstractions - is leaky. Apps don't all agree on what exactly a product or an order is; there are tons of subtle differences to be accounted for, and it was simply not possible to accommodate all the business subtleties through some kind of data normalization.

Thus, we have started to approach the data integration challenge from another angle: retrieve the app data while preserving the original structures and concepts as much as possible. The main drawback of this approach is that it requires more initial effort to get results, because the data is not transformed upfront to be compatible with all the default expectations of Lokad.

However, because the data doesn't suffer misguided transformations, Lokad no longer gets stuck being unable to accommodate business subtleties that don't fit the framework. With some programmatic glue, we accommodate the business needs down to the minute details.

Similarly to our old Excel report, the transition toward native data - as opposed to normalized data - follows our experience, which indicates that investing a little more in getting the numbers aligned with the business yields a lot more results.


Stitch Labs Integrated by Lokad

Published by Noora Kekkonen.


The online, multichannel inventory management software Stitch Labs is our latest integration. By combining Lokad and Stitch Labs, retailers can easily manage all the steps of inventory management - from controlling the stock levels and minimizing the stock-outs to inventory forecasting and automating the reordering process.

By connecting Lokad to Stitch Labs, all of your sales and product data can automatically be imported into Lokad. If you are already using Stitch Labs, all you need to do is to create a Lokad account and you can get started in minutes.

The Lokad team is here to help you to get started. Don’t hesitate to contact us if you have any questions.


Full automation ahead

Published by Noora Kekkonen.

Lokad uses advanced forecasting methods in order to produce the most accurate forecasts possible. While this accuracy is greater than with classic methods, many large reports can’t be computed instantly in real time: executing multiple operations in a specific order and retrieving data from other apps can be time-consuming. Therefore, Lokad now provides an automation feature which allows full control of all the operations needed to produce the numbers your company needs.

From simple scheduling to fully controlled sequences

Since being able to schedule operations is a must-have in advanced analytics, Lokad already provided this option in the project configuration, but it was quite limited and required an account with a third-party scheduling service. Therefore, we have now launched a native automation feature which offers both orchestration and scheduling capabilities.

Lokad project orchestration showing multiple projects scheduled to run at 0330 UTC daily. The first step (data import) will be skipped if it has been run within the past 6 hours, and if the third step fails the sequence will continue.

Orchestration and scheduling - the two pillars of advanced analytics

With the new automation feature, you can define a specific order for running projects. This way, the updated data from previous runs can be used by other projects, as each run only starts when the previous one has completed. The “skip if more recent” option is useful when dealing with long processes. For example, you can set the sequence to auto-skip one or more steps if they have already been run in the last 12 hours.

Scheduling operations allows you to have your reports ready when your company needs them - whether it is on a daily or weekly basis. Some operations require a large amount of data and their execution can take a while. Therefore, Lokad also allows you to set a specific time to start running the sequences. We particularly suggest running projects during the night. In this way you will always have your numbers ready in the morning, without waiting.


Solving the general MOQ problem

Published by Joannes Vermorel.

Minimal Order Quantities (MOQs) are ubiquitous in supply chain. At a fundamental level, MOQs represent a simple way for the supplier to indicate that there are savings to be made when products are ordered in batches rather than being ordered unit by unit. From the buyer's perspective, however, dealing with MOQs is far from being a trivial matter. The goal is not merely to satisfy the MOQs - which is easy, just order more - but to satisfy the MOQs while maximizing the ROI.

Lokad has been dealing with MOQs for years already. Yet, so far, we were using numerical heuristics implemented through Envision whenever MOQs were involved. Unfortunately, those heuristics were somewhat tedious to implement repeatedly, and the results we were obtaining were not always as good as we wanted them to be - albeit already a lot better than their "manual" counterparts.

Thus, we finally decided to roll our own non-linear solver for the general MOQ problem. This solver can be accessed through a function named moqsolv in Envision. Solving the general MOQ problem is hard - really hard - and it is a fairly complex piece of software that operates under the hood. However, through this solver, Lokad now offers a simple and uniform way to deal with all the types of MOQs commonly found in commerce or manufacturing.
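To give a feel for the problem, here is a deliberately naive per-item sketch; the general MOQ problem, where a single MOQ constraint can span many items, is precisely what requires a real solver such as moqsolv. The naive policy: either round an order up to the MOQ, or drop it entirely, whichever looks more profitable.

def apply_moq(desired_qty, moq, unit_margin, unit_carrying_cost):
    # Naive per-item MOQ policy, for illustration only: compare the margin
    # on the units actually wanted against the carrying cost of the padding
    # units needed to reach the MOQ.
    if desired_qty >= moq:
        return desired_qty
    padding = moq - desired_qty
    profit_if_rounded = desired_qty * unit_margin - padding * unit_carrying_cost
    return moq if profit_if_rounded > 0 else 0

print(apply_moq(desired_qty=8, moq=10, unit_margin=5.0, unit_carrying_cost=1.5))  # 10
print(apply_moq(desired_qty=2, moq=10, unit_margin=5.0, unit_carrying_cost=1.5))  # 0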


The Stock Reward Function

Published by Joannes Vermorel.

The classic way of thinking about replenishment consists of establishing one target quantity per SKU. This target quantity typically takes the form of a reorder point which is dynamically adjusted based on the demand forecast for the SKU. However, over the years at Lokad, we have realized that this approach was very weak in practice, no matter how good the (classic) forecasts.

Savvy supply chain practitioners usually manage to outperform this (classic) approach with a simple trick: instead of looking at SKUs in isolation, they step back and look at the bigger picture, taking into consideration the fact that all SKUs compete for the same budget. Then, practitioners choose the SKUs that seem the most pressing. This approach outperforms the usual reorder point method because, unlike the latter, it gives priority to certain replenishments. And as any business manager knows, even very basic task prioritization is better than no prioritization at all.

In order to reproduce this nice “trick”, in early 2015 we upgraded Lokad towards a more powerful form of ordering policy known as prioritized ordering. This policy precisely adopts the viewpoint that all SKUs compete for the next unit to be bought. Thanks to this policy, we are getting the best of both worlds: advanced statistical forecasts combined with the sort of domain expertise which was unavailable to the software so far.

However, the prioritized ordering policy requires a scoring function to operate. Simply put, this function converts the forecasts plus a set of economic variables into a score value. By assigning a specific score to every SKU and every unit of these SKUs, the scoring function makes it possible to rank all “atomic” purchase decisions; by atomic, we refer to the purchase of 1 extra unit for 1 SKU. As a result, the scoring function should be aligned with the business drivers as much as possible. However, while crafting approximate “rule-of-thumb” scoring functions is reasonably simple, defining a proper scoring function is a non-trivial exercise. Without getting too far into the technicalities, the main challenge lies in the “iterated” aspect of the replenishments, where the carrying costs keep accruing charges until the units get sold. Calculating 1 step ahead is easy, 2 steps ahead a little harder, and N steps ahead is actually pretty complicated.

Not so long ago, we managed to crack this problem with the stock reward function. This function breaks down the challenges through three economic variables: the per-unit profit margin, the per-unit stock-out cost and the per-unit carrying cost. Through the stock reward function, one can get the actual economic impact broken down into margins, stock-outs and carrying costs.
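A single-period sketch of this decomposition - illustrative only, since the actual stock reward function also handles the iterated, multi-period aspect discussed above:

def stock_reward(demand_pmf, stock, margin, stockout_cost, carrying_cost):
    # Expected economic outcome of holding `stock` units, broken down into
    # margin, stock-out costs and carrying costs, one period ahead.
    exp_margin = exp_stockout = exp_carrying = 0.0
    for demand, p in demand_pmf.items():
        exp_margin += p * min(demand, stock) * margin
        exp_stockout += p * max(demand - stock, 0) * stockout_cost
        exp_carrying += p * max(stock - demand, 0) * carrying_cost
    return exp_margin - exp_stockout - exp_carrying

demand_pmf = {0: 0.2, 1: 0.3, 2: 0.3, 3: 0.15, 4: 0.05}
for stock in range(5):  # the reward peaks at the economically best stock level
    print(stock, round(stock_reward(demand_pmf, stock, margin=10,
                                    stockout_cost=6, carrying_cost=2), 2))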

The stock reward function represents a superior alternative to all the scoring functions we have used so far. Actually, it can even be considered a mini-framework that can be adjusted with a small (but quite expressive) set of economic variables in order to best tackle the strategic goals of merchants, manufacturers or wholesalers. We recommend using this function whenever probabilistic forecasts are involved.

Over the course of the coming weeks, we will gradually update all our Envision templates and documentation materials to reflect this new Lokad capability.


Hiring a software engineer with a taste for compilers and big data

Published by Joannes Vermorel.

Lokad is growing, and we are hiring again.

At Lokad we use Envision, our in-house programming language, to write data analysis scripts and adapt our forecasts to the business constraints of our customers, processing hundreds of gigabytes of data each day.

Envision is a modern, strongly typed, high-performance relational language with inspiration from SQL, Python, R and the Excel approach to column data. Its state-of-the-art compiler performs type and table inference to minimize the need for annotations and uses static analysis to optimize the execution plan and reuse cached data from previous runs, generating scripts in an intermediate language that is compiled down to CIL and allows the injection of custom C# code.

Envision, its compiler and its tooling are still growing, and we are looking for new team members to help us develop it further. You would be contributing to the core compiler code, implementing new language features and optimization modes.

You will benefit from an awesome dev team to support you and from a calm working environment (nobody works in open spaces at Lokad). You will be reporting directly to the CTO of the company.

Some experience working on compilers, or with operational or denotational semantics, is expected (for a junior position, a university compiler project would qualify). In-depth knowledge of SQL, relational algebra or pure functional languages is a big plus. We do not require any prior knowledge of our development and production stack (C#, .NET, Visual Studio, Azure and Git).

We expect you to be fluent in English.

About us: Lokad is a software company that specializes in Big Data for commerce. We help merchants, and a few other verticals (aerospace, manufacturing), to forecast their inventory and to optimize their prices. We are profitable and we are growing fast. We are closing deals in North America, Europe and Asia. The vast majority of our clients are outside France. We are located 50m from Place d'Italie in Paris (France).

To apply: Drop an email with your resume at contact@lokad.com.


Optimizing container shipments

Published by Joannes Vermorel.

Supply chain management has long gone global: even small businesses are now importing goods from overseas whenever they identify the right business opportunities. However, while supply chain data can flow back and forth across the globe at a fraction of the speed of light, physical goods are still mostly freighted via containers with lead times counted in weeks, if not months. On top of that, containers further complicate the task of supply chain practitioners by imposing both volume and weight constraints.

Lokad now supports dozens of companies in optimizing their order quantities while taking into account their container shipment constraints. Below, we review some of the most important insights we have gathered when dealing with demand planning combined with container shipments.

The most frequently overlooked aspect of dealing with containers is probably the importance of the ordering lead times. Indeed, except for extremely large businesses, ordering in containers imposes significant waiting periods between successive orders to the same supplier. Neglecting the ordering lead times results in a significant underestimation of the demand to be covered, and causes costly stock-outs. Consequently, the ordering lead time, like the supply lead time, needs to be forecast too; this makes it not just a demand forecast, but a lead time forecast as well.

Then, the second most overlooked factor is how badly the constraints associated with container shipments fit the classic ordering policies, such as order-up-to-level or order-quantity. Such ordering policies fail to satisfy the necessary constraints, and as a result, the ordered quantities either exceed or underuse the container capacity. For this reason, supply chain practitioners end up spending a lot of time making manual corrections in order to get the quantities to match the container capacity. A much more efficient solution involves a prioritized ordering policy, where items keep being added up to the point where the container is full.

When Lokad tackles demand planning in the presence of container shipment constraints, the two primary questions we strive to address are:

  • What is the “best” composition of the next container to be ordered (which items, which quantities)?
  • What is the expected profitability of this next best container?

As long as we can address these two questions, ordering from suppliers becomes a piece of cake. All it takes is refreshing the forecasting “logic” in your Lokad account on a daily basis, checking whether the next “best” container to be ordered reaches a certain profitability threshold, and, when it does, ordering the suggested quantities. This process is even more flexible than filling the container up to its full capacity, as it makes it possible to consider circumstances where the most profitable containers are not filled up to 100%. In fact, it’s really up to the profitability analysis to decide whether each item is worth putting in the next container or not.
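A greedy sketch of such a prioritized fill - with made-up economics and a simplistic ranking, not Lokad's production logic - looks as follows:

def compose_container(candidates, max_volume, max_weight, min_profit):
    # candidates: (item, expected_profit, volume, weight) for the next unit
    # of each item; in a real setup, the expected profit of each extra unit
    # shrinks as the order digs deeper into the tail of the demand.
    chosen, volume, weight, profit = [], 0.0, 0.0, 0.0
    ranked = sorted(candidates, key=lambda c: c[1] / max(c[2], 1e-9),
                    reverse=True)  # profit density per unit of volume
    for item, p, v, w in ranked:
        if volume + v <= max_volume and weight + w <= max_weight:
            chosen.append(item)
            volume, weight, profit = volume + v, weight + w, profit + p
    # Ship only when the whole container clears the profitability threshold.
    return chosen if profit >= min_profit else []

units = [("A", 90.0, 1.2, 14.0), ("B", 75.0, 0.8, 30.0), ("C", 20.0, 1.5, 9.0)]
print(compose_container(units, max_volume=2.5, max_weight=40.0, min_profit=90.0))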

Computing precise estimations of both margins and costs requires a forecasting technology capable of considering a myriad of scenarios. At Lokad, we achieve this through probabilistic forecasts: we don’t forecast the average demand, but the probabilities associated with (almost) all future demand levels. Through our probabilistic forecasts, every scenario can be assessed financially and then weighted against its probability. Finally, every potential container composition can be assessed through its weighted average of financial outcomes, the weights being the probabilities associated with the respective demand scenarios.

The method for handling container shipments that we have just briefly described might look quite intensive as far as computations are involved. Well, it is. However, the time and expertise of supply chain practitioners is far too valuable to be “burned away” spending countless hours on tweaking Excel sheets.

This leads us to our third most overlooked aspect relating to containers: manually composing containers is a very tedious process, and this process comes at the expense of more fundamental supply chain improvements. Indeed, we frequently notice that small companies could order containers more frequently; however, the process of figuring out the exact composition of containers is so time-consuming that, realistically, it can’t be done more than once a month. In a similar vein, for larger companies, we also often notice that opportunities to consolidate shipments from multiple suppliers shipping from the same port are frequently dismissed, not because they are impractical, but simply because this would require a method that cannot be supported by manual processes.

As a result, in practice, manual container composition "hits" companies in two different ways: first, because the composition of the container isn’t really optimized in the first place, and second, because it consumes most of the supply chain management resources, which would be better spent improving the supply chain as a whole.

Lokad’s technology makes it quite straightforward to compose optimized containers in a fully automated manner. Check out our more technical entries in case you would like to tackle the challenge yourself. In practice, our Lokad team is here to assist your company in getting it right, as containers might not be the only constraint that your company is facing: there might be minimum order quantities, warehouse storage capacity, etc.


Retail pricing strategies

Published by Noora Kekkonen.

Pricing strategies are an essential part of demand forecasting, as prices directly influence demand. All too often, companies settle for benchmarking prices when they should actually benchmark pricing strategies. Therefore, we have extended our knowledge base with a new collection of articles about the most popular pricing methods used in retail.

Pricing concepts

At Lokad, we believe in optimizing pricing strategies instead of raw prices. By ‘pricing strategies’, we refer here to the methods of computing optimized prices given the available data and the market conditions. A popular way to assess the quality of a pricing strategy is to refer to the price elasticity of demand. However, price elasticity can be misleading, as it is a limited indicator of demand.

Depending on the type of market, retailers can choose short-term or long-term pricing strategies. A high price maximizes short-term profit, but results in a loss of market share. A low price maximizes long-term profit because it allows a firm to gain market share. In both cases, prices are at their best when frequently re-evaluated. Repricing software, such as Lokad, aids in this re-evaluation by automatically recomputing prices depending on the market conditions.

Most popular pricing strategies

In order to influence buying behavior, retailers can choose from a vast range of pricing strategies. For instance, one may want to increase the willingness to pay by creating product bundles and using bundle pricing; or use the same prices as one’s competitors with competitive pricing; or set prices based on the production costs and the desired level of mark-up with cost-plus pricing.

The decoy pricing method can be applied when one wishes to steer the customer with either a slightly lower price on a much lower quality product, or, on the contrary, a much higher price on a slightly higher quality product.

One widely used method is odd pricing, which aims to maximize profit by making micro-adjustments to the pricing structure. For example, this could mean setting a price at $17.99 instead of $17. In addition to the price structure, a retailer may also want to optimize the style of the prices. For some types of markets, price skimming can be a good option. This method consists of applying a very high price at first, for the “early adopters”, and then gradually decreasing the price over time. The opposite approach is penetration pricing: this quite aggressive type of pricing means setting the price at a very low level in order to increase demand, and only raising it later.


Price elasticity is a poor angle for looking at demand planning

Published on by Joannes Vermorel.

Lokad regularly gets asked to leverage the price elasticity of demand for demand planning purposes, most notably to handle promotions. Unfortunately, statistical forecasting is counter-intuitive: while leveraging demand elasticity might feel like a “good” approach, our extensive experience with promotions indicates that this approach is misguided and nearly always does more harm than good. Let’s briefly review what goes wrong with price elasticity.

A local indicator

Price elasticity is fundamentally a local indicator, in the mathematical sense: while it is possible to compute the local coefficient of the price elasticity of demand, there is no guarantee that this local coefficient bears any resemblance to the coefficients that would be computed for alternative prices.

For example, it might make sense for McDonald’s to assess the elasticity coefficient for, say, the Big Mac moving from $3.99 to $3.89, because it’s a small price move - about 2.5% in amplitude - and the new price remains very close to the old one. And given McDonald’s scale of activity, it’s not unreasonable to assume that the demand function is relatively smooth with respect to the price.
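To make the notion of a "local coefficient" concrete, here is a back-of-the-envelope arc elasticity computed on the Big Mac price move. Only the prices come from the text; the demand figures are invented for the sake of the example.

```python
# Midpoint (arc) price elasticity of demand between two observations.
# The demand numbers below are hypothetical.

def arc_elasticity(p0, p1, q0, q1):
    dq = (q1 - q0) / ((q0 + q1) / 2)  # relative change in demand
    dp = (p1 - p0) / ((p0 + p1) / 2)  # relative change in price
    return dq / dp

# $3.99 -> $3.89 is a ~2.5% move; assume demand rises from 1000 to 1020 units.
print(arc_elasticity(3.99, 3.89, 1000, 1020))  # ~ -0.78, valid only near $3.9
```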

At the other end of the spectrum, promotions, especially in the FMCG (fast moving consumer goods) and general merchandise sectors, are completely unlike the McDonald’s case described above. A promotion typically shifts the price by more than 20%, an entirely non-local move that yields very erratic results, nothing like the smooth macro-effects that may be observed for McDonald's and its Big Mac.

Thresholds all over the place

The price elasticity insight is fundamentally geared towards smooth, differentiable demand functions. Oh yes, it is theoretically possible to approximate even a very rugged function with a differentiable one, but in practice, the numerical performance of this viewpoint is very poor. Indeed, markets are full of threshold effects: if customers are very price sensitive, then being able to offer them a price just a little lower than any competitor can alter the market share rather dramatically. In such markets, it’s unreasonable to assume that demand responds smoothly to price changes; on the contrary, demand responses should be expected to be swift and erratic.
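A toy demand curve makes the point: with a hard threshold at the competitor's price, any locally measured elasticity is blind to what happens across the threshold. All numbers below are invented.

```python
# A toy demand curve with a hard threshold at the competitor's price,
# illustrating why a locally-fitted elasticity misses threshold effects.

def demand(price, competitor_price=20.0):
    # Price-sensitive customers overwhelmingly pick whoever is cheapest.
    return 1000.0 if price < competitor_price else 150.0

# Locally, demand looks perfectly inelastic (flat)...
print(demand(18.00), demand(19.50))  # 1000.0 1000.0 -> local elasticity ~ 0
# ...yet a small move across the threshold collapses it.
print(demand(20.10))                 # 150.0
```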

Hidden co-variables

Last but not least, one fundamental issue with using price elasticity for demand planning in the context of promotions is that price elasticity puts too much emphasis on the pricing aspect of demand. There are other variables, the so-called co-variables, that have a deep influence on the overall level of demand. These co-variables too often remain hidden, even though identifying them is very much feasible.

Indeed, a promotion is first and foremost a negotiation between a supplier and a distributor. The expected increase in demand certainly depends on the price, but our observations indicate that changes in demand primarily depend on the way a given promotion is executed by the distributor: the commitment to extra volume, a strong promotional message, additional or better-located shelf space, and the potential temporary de-emphasis of competing products typically impact demand in ways that dwarf the pricing impact examined on its own.

Reducing the promotional uplift to a matter of price elasticity is frequently a misguided numerical approach standing in the way of better demand planning. A deep understanding of the structure of promotions is more important than the prices.


Streetlight effect and forecasting

Published on by Joannes Vermorel.

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is." David H. Freedman (2010). Wrong: Why Experts Keep Failing Us.

One of the most paradoxical things about “classic” forecasts is that they seek the average - sometimes the median - value of the future demand, while this average case, as we will see below, is mostly irrelevant. Whenever daily, weekly or monthly forecasts are being used, these can be considered average forecasts. Why? Because averages are the only forecasts that can be summed across periods; other kinds of forecasts, such as quantile forecasts, are not additive, which makes them fairly counter-intuitive. In fact, most supply chain practitioners aren’t even aware that alternatives to "classic" forecasts exist in the first place.
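The non-additivity of quantiles is easy to check numerically. The sketch below, using simulated demand for two weeks, shows that weekly averages add up while weekly 90% quantiles do not.

```python
# A quick numerical check that quantile forecasts are not additive,
# while averages are. Demand levels here are simulated, for illustration.
import numpy as np

rng = np.random.default_rng(42)
week1 = rng.poisson(20, size=100_000)  # simulated demand, week 1
week2 = rng.poisson(20, size=100_000)  # simulated demand, week 2

# Averages add up: mean(week1) + mean(week2) == mean(week1 + week2).
print(week1.mean() + week2.mean(), (week1 + week2).mean())

# 90% quantiles do not: summing per-week quantiles overshoots the
# 90% quantile of the two-week demand.
print(np.percentile(week1, 90) + np.percentile(week2, 90),
      np.percentile(week1 + week2, 90))
```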

However, business-wise, as far as inventory is concerned, it’s not the middle ground that costs money, rather it’s the extremes. On the one hand, there is the unexpectedly high demand that causes a stock-out. On the other hand, there is the unexpectedly low demand that causes dead inventory. When the demand level is roughly where it was expected to be, inventory levels gently fluctuate, and inventory rotates very satisfyingly.

As a result, there is no point in optimizing the average case, i.e. when inventory is rotating very satisfyingly, because there is little or nothing to improve in the first place. It’s the extremes that need to be taken care of. Actually, most practitioners are keenly aware of this issue, as their top 2 problems are to improve the service quality on the one hand (i.e. mitigating the unexpectedly high demand), while keeping the stock levels in check on the other hand (i.e. mitigating the unexpectedly low demand).

Yet, since we have agreed that supply chain challenges are mainly concerned with the "extremes", why do so many companies still look for answers in “average” forecasts? I believe that supply chain management, as an industry, is suffering from a bad case of drunkard’s search, a problem also known as the streetlight effect. Classic tools and processes shed light on “average” situations which barely need any further illumination, while leaving whatever lies at the extremes entirely in the dark.

A frequent misconception consists of thinking that improving the “middle” case should also marginally improve the extremes. Alas, statistical forecasting is counter-intuitive, and basic numerical analysis shows that this is simply not the case. Statistical forecasting is like a microscope: incredibly sharp, but with an incredibly narrow focus.

Trying to fix your supply chain problems through classic “average” forecasts is like trying to diagnose why your car refuses to start by putting every single car part under a microscope, starting with the engine. At this rate, you will probably never figure out that your car won’t move because the tank is empty, which, in hindsight, was a pretty obvious problem.

However, this is not the end of the insanity. Now imagine that the repair guy, after failing to diagnose why your car isn’t moving, started to claim that his diagnosis had failed because his microscope didn’t have enough resolution. And now the repair guy is asking you for more money so that he can buy a better microscope.

Well, a similar scenario is presently happening in many companies: the previous forecasting initiative has failed to deliver the desired inventory performance, and companies double down with another forecasting initiative along the very same lines that caused the first one to fail.

At Lokad, it took us 5 years to realize that the classic forecasting approach wasn’t working, and worse, that it would never work no matter how much technology we threw at it, just like switching to a $27M ultra-high resolution microscope would never have helped the repair guy diagnose your empty tank. In 2012, we uncovered quantile forecasts, which we have steadily kept improving; and suddenly, things started working.

Those five years of steady, ongoing failures felt long, very long. In our defense, when an entire industry is built on false promises that can be traced back to university textbooks, it’s not that easy to start thinking outside the box when the box itself is so huge that you can spend your life wandering in circles inside it without ever hitting the walls.


Magento in beta at Lokad

Published on by Joannes Vermorel.

Just a few days ago, we announced Lokad's integration with Shopify. Today, it's the turn of another vastly popular e-commerce content management system: the native integration of Magento is now live in beta at Lokad.

This integration relies on Magento's REST API, which has been available since version 1.7, released back in April 2012. Authentication relies on OAuth. The set-up requires a bit of configuration in the Magento admin panel to grant access to a third-party app like Lokad. However, thanks to this set-up, you have very fine-grained control over which data Lokad can read or write (hint: we only need read-only access).
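For the technically inclined, reading products through this API boils down to an OAuth 1.0a signed GET request. The snippet below is a minimal sketch: the store URL and all credentials are placeholders, and the third-party requests_oauthlib package is just one possible client; your Magento admin panel provides the actual consumer and access tokens.

```python
# A minimal sketch of reading products through the Magento 1.x REST API
# with OAuth 1.0a. URL and tokens below are placeholders.
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    client_key="CONSUMER_KEY",
    client_secret="CONSUMER_SECRET",
    resource_owner_key="ACCESS_TOKEN",
    resource_owner_secret="ACCESS_TOKEN_SECRET",
)
resp = session.get(
    "https://example-store.com/api/rest/products",
    headers={"Accept": "application/json"},  # read-only access suffices
)
resp.raise_for_status()
print(resp.json())
```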

This integration is still in beta, as we haven't yet properly tested it against the many versions of Magento released over the last 3 years. Don't hesitate to give it a try though; Lokad is here to help you get started should you run into any technical difficulties.


Shopify integrated by Lokad

Published on by Joannes Vermorel.

The retail platform Shopify is our latest integration. Shopify-powered merchants can now get advanced inventory forecasts and powerful commerce analytics in just a few clicks. Check out the Lokad app in the Shopify app store.

Through the Shopify API, Lokad retrieves all the product and sales data that contribute to your inventory optimization and your pricing optimization. Don't let the competition outservice your business.

As usual, the Lokad team is here to help. This integration is still very recent, and glitches may happen. Don't hesitate to contact us if you face any issue while plugging your Shopify store into Lokad.


Forecasting the series of future orders to suppliers

Published on by Joannes Vermorel.

Collaborative supply chain management makes a lot of sense. In today’s day and age of ubiquitous internet connectivity, why should your suppliers be kept in the dark about your upcoming purchase orders? After all, if your company is capable of producing accurate forecasts of its upcoming orders, sharing these forecasts with your suppliers would certainly be of great help to them, which, in turn, would yield better service and/or better prices.

Yes, but all of this relies on a flawed assumption: order forecasts ought to be accurate. Unfortunately, they won’t be. Period. So whatever follows is merely wishful thinking.

Companies frequently get back to us asking if Lokad could forecast the sequence of upcoming purchase orders. After all, we should have everything it takes:

  • daily/weekly future sales levels (forecasted)
  • current stock levels, both on hand and on order
  • purchase constraints

By combining the elements mentioned above, we could certainly roll out a simulation and consequently forecast the upcoming purchase orders over a period specified by a client. However, although this is possible to do, the results of such an operation would be disastrous. In this short post, we share our insights on this issue, to help companies avoid wasting time on such forecasting attempts.

Statistics are terribly counter-intuitive. As mentioned in our previous posts, “intuitive” approaches are most certainly wrong; and the “correct” approaches are unsettling at best.

The central problem with forecasting supplier orders is that the calculations involved rely on an iterated sum of forecasts, which is very wrong on multiple levels. In particular, forecasting the next purchase order involves not one but two variables: the date of the order and the quantity ordered. Depending on the supply chain constraints, the quantity ordered might be relatively straightforward to forecast: if there is a minimum order quantity (MOQ), the order is likely to equal the MOQ threshold itself; on the other hand, if the item is expensive and rarely sold, the next quantity to be ordered is likely to be a single unit.

The true challenge lies in forecasting the date of the next purchase order, and, even more challenging, the date of the purchase order after that. Indeed, not only is the date of the next purchase order likely to carry a 20% to 30% error (like pretty much any demand forecast), but the date of the order that follows will carry (roughly) twice that error, the one after that (roughly) three times the error, and so on.
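A small Monte Carlo simulation makes this compounding visible. The reorder policy and demand parameters below are invented for illustration, and this is not Lokad's engine; the point is only that the spread on the date of the Nth order keeps widening as N grows.

```python
# A Monte Carlo sketch of a naive reorder-on-empty policy, showing how
# the uncertainty on the date of the Nth purchase order keeps widening.
import random
import statistics

def order_dates(n_orders, mean=10.0, sd=4.0, reorder_qty=100.0):
    """Return the days on which replenishment orders are triggered."""
    dates, stock, day = [], reorder_qty, 0
    while len(dates) < n_orders:
        day += 1
        stock -= max(0.0, random.gauss(mean, sd))  # noisy daily demand
        if stock <= 0.0:
            dates.append(day)
            stock += reorder_qty  # instantaneous replenishment, to keep it simple
    return dates

runs = [order_dates(5) for _ in range(2000)]
for n in range(5):
    nth = [r[n] for r in runs]
    print(f"order #{n + 1}: mean day {statistics.mean(nth):.1f}, "
          f"spread (std dev) {statistics.stdev(nth):.2f}")
```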

As illustrated above, the uncertainty regarding the date of the Nth upcoming purchase order grows so fast in practice that it becomes a worthless piece of information for the supplier. The supplier will be much better off producing her own forecasts based on her own demand history, even if those forecasts can't leverage the most recent demand signal observed downstream.

However, while forecasting purchase orders and sharing them with suppliers doesn’t work, moving towards more collaborative supply chain management remains a valid business goal; it just happens that this type of forecast is not the right way to pursue this objective.

Stay tuned, we will make sure to discuss here in due course how collaborative supply chain management can be correctly executed from a predictive perspective.


NetSuite integrated by Lokad

Published on by Joannes Vermorel.

NetSuite was one of the first ERP systems operating fully in SaaS mode. Over the years, the NetSuite solution has steadily expanded, and NetSuite now features an extensive business suite which includes financials, CRM and more.

Today, we are proud to announce that NetSuite is now natively supported by Lokad. Thanks to the SuiteTalk integration (web service), Lokad can import the entirety of the relevant NetSuite data and deliver advanced inventory forecasts and/or pricing optimization.

The NetSuite integration is already live. We import inventory items, sales orders and purchase orders. All you need to get started is a Lokad account, and you can get one for free in less than 1 minute.

The Lokad team is here to take extra special care of our early NetSuite-powered clients to make sure everything goes very smoothly.
