Joining tables with Envision

Published on by Joannes Vermorel.

When it comes to supply chain optimization, it’s important to accommodate the challenges while minimizing the amount of reality distortion that gets introduced in the process. The tools should embrace the challenge as it stands instead of distorting the challenge to make it fit within the tools.

Two years ago, we introduced Envision, a domain-specific language, precisely intended as a way to accommodate the incredibly diverse range of situations found in supply chain. From day 1, Envision offered a programmatic expressiveness that was a significant step forward compared to traditional supply chain tools. However, this flexibility was still limited by the viewpoint taken by Envision itself on the supply chain data.

A few months ago, we introduced a generic JOIN mechanism in Envision. Envision is no longer limited to natural joins as it was initially, and offers the possibility to process a much broader range of tabular data. In supply chain, arbitrary table joins are particularly useful to accommodate complex scenarios such as multi-sourcing, one-way compatibilities, multi-channel distribution, etc.

For the readers who may already be familiar with SQL, joining tables feels like a rather elementary operation; however, in SQL, combining complex numeric calculations with table joins rapidly ends up with source code that looks obscure and verbose. Moreover, joining large tables also raises quite a few performance issues which need to be carefully addressed, either by adjusting the SQL queries themselves, or by adjusting the database itself through the introduction of table indexes.

One of the key design goals for Envision was to give up on some of the capabilities of SQL in exchange for a much lower coding overhead when facing supply chain optimization challenges. As a result, the initial Envision was solely based on natural joins, which removed almost entirely the coding overhead that JOIN operations usually entail in SQL.

Natural joins have their limits, however, and we lifted those limits by introducing the left-by syntax within Envision. Through left-by statements, it becomes possible to join arbitrary tables within Envision. Under the hood, Envision takes care of creating optimized indexes to keep the calculations fast even when dealing with giganormous data files.
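The mechanics of an indexed left join can be illustrated in plain Python: build a hash index on the join key of one table, then enrich the rows of the other. The tables and column names below are purely illustrative and are not Envision syntax:

```python
# Minimal sketch of a left join backed by a hash index. The tables and
# column names are illustrative; this is not Envision syntax.
orders = [
    {"id": "O1", "supplier": "S1", "qty": 10},
    {"id": "O2", "supplier": "S2", "qty": 5},
    {"id": "O3", "supplier": "S9", "qty": 2},  # no matching supplier
]
suppliers = [
    {"supplier": "S1", "lead_time": 30},
    {"supplier": "S2", "lead_time": 45},
]

# Building the index once is what keeps the join fast on large tables.
index = {s["supplier"]: s["lead_time"] for s in suppliers}

# Left join: every order is kept, matched or not.
joined = [{**o, "lead_time": index.get(o["supplier"])} for o in orders]
```

The index turns each lookup into a constant-time operation, which is, roughly speaking, the kind of work a database index performs behind an SQL JOIN.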

From a pure syntax perspective, the left-by is a minor addition to the Envision language; however, from a supply chain perspective, this one feature significantly improved the capacity of Lokad to accommodate the most complex situations.

If you don’t have a data scientist in-house who happens to be a supply chain expert too, we do. Lokad can provide an end-to-end service where we take care of implementing your supply chain solution.

Categories: Tags: envision technical release No Comments

Insights on the Lokad tech evolution

Published on by Joannes Vermorel.

The technology of Lokad has evolved so much that people who have had the chance to trial Lokad even two years ago would barely recognize the app as it stands today.

The "old" Lokad was completely centered around our forecasting engine - i.e. what you can see as a forecasting project in your Lokad account today. As a result, our forecasting engine gradually gained tons of features not even remotely related to statistics. About two years ago, our forecasting engine had become a jack-of-all-trades responsible for almost everything:

  • data preparation with the possibility to accommodate a large diversity of data formats
  • reporting analytics with a somewhat complex, and somewhat flexible, Excel forecasting report
  • scheduled execution through a webcron integration or through the API

Then, during the last two years, we have gradually introduced stand-alone replacements for those features that now live outside our forecasting engine. However, calling those new features mere replacements is unfair, because those replacements are vastly more powerful than their original counterparts.

  • We can now process very diverse files, varying in size, in complexity and even in data formats. Plus, we have many data connectors too.
  • The capabilities of our old Excel forecasting report are dwarfed by the newer reporting capabilities of Envision.
  • Scheduling and orchestration are now first-class citizens which also encompass the data retrieval from other apps.

Because those new features are plainly superior to the old ones, we are gradually phasing out the cruft, that is, phasing out all the non-forecasting related things that still live inside our forecasting engine.

In order to keep the process smooth, we are gradually - but actively - migrating all our clients from the old Lokad to the new Lokad; and when an old feature isn't used anymore, we remove it entirely.

The old Excel forecasting report is a tough case for us. The challenge is not to merely duplicate the report itself within Envision (that alone isn't hard at all) - the challenge is that the underlying thinking that went into this report is now fairly outdated. Indeed, over the years, Lokad has introduced better forecasting technologies - the latest iteration being probabilistic forecasts - which cannot be made to fit within this report. By design, this one report is stuck with a legacy approach to forecasting, which unfortunately is not such a good fit as far as inventory optimization is concerned.

In contrast, combining probabilistic forecasts with business drivers does require more effort both on the Lokad side and the client side, but the business results simply don’t compare. The former approach optimizes percents of error while the latter optimizes dollars of error. Unsurprisingly, once our clients realize how much money they leave on the table by not doing the latter, they never consider going back to the former.
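A toy calculation illustrates why percents of error and dollars of error can disagree; all the numbers below are hypothetical:

```python
# Hypothetical two-product example: the forecast that wins on percents
# of error loses badly on dollars of error. All numbers are made up.
actual_demand = {"A": 100, "B": 4}      # units sold
unit_margin   = {"A": 1.0, "B": 200.0}  # dollars of margin per unit

def percent_error(forecast):
    """Sum of relative errors - the 'classic' accuracy viewpoint."""
    return sum(abs(forecast[k] - actual_demand[k]) / actual_demand[k]
               for k in actual_demand)

def dollar_error(forecast):
    """Forecast error weighted by its economic impact."""
    return sum(abs(forecast[k] - actual_demand[k]) * unit_margin[k]
               for k in actual_demand)

f1 = {"A": 100, "B": 2}  # perfect on A, 50% off on high-margin B
f2 = {"A": 40,  "B": 4}  # 60% off on low-margin A, perfect on B
```

Here f1 wins on percents of error, yet f2 is vastly better in dollar terms, because item B carries almost all of the economic stakes.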

Then, our data integrations are currently undergoing a similar, and no less radical, transformation. When we started developing data connectors, we tried to fit all the data we were retrieving into the framework established by our forecasting engine; that is, producing files such as Lokad_Items.tsv, Lokad_Orders.tsv, etc. This approach was initially appealing because it forced a normalization on the data retrieved and processed by Lokad.

Unfortunately, this abstraction - like all abstractions - is leaky. Not all apps agree on what exactly a product or an order is; there are tons of subtle differences to be accounted for, and it was simply not possible to accommodate all the business subtleties through some kind of data normalization.

Thus, we have started to take the data integration challenge from another angle: retrieve the app data while preserving as much as possible the original structures and concepts. The main drawback of this approach is that it requires more initial efforts to get results because the data is not transformed upfront to be compatible with all the default expectations of Lokad.

However, because the data doesn't suffer misguided transformations, it also means that Lokad does not get stuck being unable to accommodate business subtleties because they don't fit the framework. With some programmatic glue, we can accommodate the business needs down to the minute details.

Similarly to our old Excel report, the transition toward native data - as opposed to normalized data - follows our experience, which indicates that investing a little more in getting the numbers aligned with the business yields a lot more results.

Categories: Tags: insights No Comments

Stitch Labs Integrated by Lokad

Published on by Noora Kekkonen.


The online, multichannel inventory management software Stitch Labs is our latest integration. By combining Lokad and Stitch Labs, retailers can easily manage all the steps of inventory management - from controlling the stock levels and minimizing the stock-outs to inventory forecasting and automating the reordering process.

By connecting Lokad to Stitch Labs, all of your sales and product data can automatically be imported into Lokad. If you are already using Stitch Labs, all you need to do is to create a Lokad account and you can get started in minutes.

The Lokad team is here to help you to get started. Don’t hesitate to contact us if you have any questions.

Categories: Tags: release stitchlabs No Comments

Full automation ahead

Published on by Noora Kekkonen.

Lokad uses advanced forecasting methods in order to produce the most accurate forecasts possible, and while that accuracy is greater than with classic methods, many large reports can’t be computed instantly in real time. Executing multiple operations in a specific order and retrieving data from other apps can sometimes be time consuming. Therefore, Lokad now provides an automation feature which allows full control of all the operations needed to produce the numbers your company needs.

From simple scheduling to fully controlled sequences

Since being able to schedule operations is a must-have feature in advanced analytics, Lokad already provided this option in the project configuration, but it was quite limited and required an account on a third party scheduling service. Therefore, we have now launched a native automation feature which offers both orchestration and scheduling possibilities.

Lokad project orchestration showing multiple projects scheduled to run at 0330 UTC daily. The first step (data import) will be skipped if it has been run within the past 6 hours, and if the third step fails the sequence will continue.

Orchestration and scheduling - the two pillars of advanced analytics

With the new automation feature, you can define a specific order for running projects. In this way, the updated data from previous runs can be applied to other projects, as the run will only start when the previous one is completed. The “skip if more recent” option is useful when dealing with long processes. For example, you can set the sequence to auto-skip one or more steps if they have already been run in the last 12 hours.
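The "skip if more recent" rule boils down to a freshness check before each step. A minimal sketch in Python, where the function names and the 12-hour default window are illustrative, not Lokad's actual API:

```python
# Sketch of the "skip if more recent" rule: a step is skipped when its
# last successful run falls within the freshness window. The function
# names and the 12-hour default are illustrative, not Lokad's API.
def should_skip(last_run_hours_ago, freshness_window_hours=12.0):
    return (last_run_hours_ago is not None
            and last_run_hours_ago <= freshness_window_hours)

def run_sequence(steps):
    """steps: list of (name, hours_since_last_run); None means never run.
    Returns the names of the steps actually executed, in order."""
    executed = []
    for name, last_run in steps:
        if should_skip(last_run):
            continue  # fresh enough, move on to the next step
        executed.append(name)  # stand-in for the actual project run
    return executed
```

A data-import step that ran 6 hours ago is skipped, while stale or never-run steps still execute in their defined order.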

Scheduling operations allows you to have your reports ready when your company needs them - whether it is on a daily or weekly basis. Some operations require a large amount of data and their execution can take a while. Therefore, Lokad also allows you to set a specific time to start running the sequences. We particularly suggest running projects during the night. In this way you will always have your numbers ready in the morning, without waiting.

Categories: Tags: release technology No Comments

Solving the general MOQ problem

Published on by Joannes Vermorel.

Minimal Order Quantities (MOQs) are ubiquitous in supply chain. At a fundamental level, MOQs represent a simple way for the supplier to indicate that there are savings to be made when products are ordered in batches rather than being ordered unit by unit. From the buyer's perspective, however, dealing with MOQs is far from being a trivial matter. The goal is not merely to satisfy the MOQs - which is easy, just order more - but to satisfy the MOQs while maximizing the ROI.

Lokad has been dealing with MOQs for years already. Yet, so far, we were using numerical heuristics implemented through Envision whenever MOQs were involved. Unfortunately, those heuristics were somewhat tedious to implement repeatedly, and the results we were obtaining were not always as good as we wanted them to be - albeit already a lot better than their "manual" counterparts.

Thus, we finally decided to roll our own non-linear solver for the general MOQ problem. This solver can be accessed through a function named moqsolv in Envision. Solving the general MOQ problem is hard - really hard - and under the hood, a fairly complex piece of software is at work. However, through this solver, Lokad now offers a simple and uniform way to deal with all types of MOQs commonly found in commerce or manufacturing.
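To hint at the shape of the problem, here is a naive greedy heuristic for a single MOQ constraint, sketched in Python; this is emphatically not how moqsolv works internally:

```python
# Naive greedy heuristic for a single MOQ constraint, only to hint at
# the shape of the problem - emphatically NOT how moqsolv works.
def order_with_moq(candidates, moq):
    """candidates: list of (sku, expected_return_of_this_unit), one
    entry per purchasable unit; moq: minimal total units to order.
    Returns the retained units, or [] when ordering is not worth it."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    order = [c for c in ranked if c[1] > 0]    # units worth buying anyway
    i = len(order)
    while len(order) < moq and i < len(ranked):
        order.append(ranked[i])                # pad up to the MOQ
        i += 1
    return order if len(order) >= moq else []  # MOQ unreachable: skip
```

The real difficulty, which this sketch dodges entirely, is that padding the order changes the economics of every unit already in it, and that multiple overlapping MOQs (per item, per category, per order) interact - which is what calls for a non-linear solver in the first place.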

Categories: Tags: insights supply chain envision No Comments

The Stock Reward Function

Published on by Joannes Vermorel.

The classic way of thinking about replenishment consists of establishing one target quantity per SKU. This target quantity typically takes the form of a reorder point which is dynamically adjusted based on the demand forecast for the SKU. However, over the years at Lokad, we have realized that this approach was very weak in practice, no matter how good the (classic) forecasts.

Savvy supply chain practitioners usually tend to outperform this (classic) approach with a simple trick: instead of looking at SKUs in isolation, they step back and look at the bigger picture, taking into consideration the fact that all SKUs compete for the same budget. Then, practitioners choose the SKUs that seem to be the most pressing. This approach outperforms the usual reorder point method because, unlike the latter, it gives priority to certain replenishments. And as any business manager knows, even very basic task prioritization is better than no prioritization at all.

In order to reproduce this nice “trick”, in early 2015 we upgraded Lokad towards a more powerful form of ordering policy known as prioritized ordering. This policy precisely adopts the viewpoint that all SKUs compete for the next unit to be bought. Thanks to this policy, we are getting the best of both worlds: advanced statistical forecasts combined with the sort of domain expertise which was unavailable to the software so far.

However, the prioritized ordering policy requires a scoring function to operate. Simply put, this function converts the forecasts plus a set of economic variables into a score value. By assigning a specific score to every SKU and every unit of these SKUs, this scoring function offers the possibility to rank all “atomic” purchase decisions. By atomic, we refer to the purchase of 1 extra unit for 1 SKU. As a result, the scoring function should be as aligned with the business drivers as possible. However, while crafting approximate “rule-of-thumb” scoring functions is reasonably simple, defining a proper scoring function is a non-trivial exercise. Without getting too far into the technicalities, the main challenge lies in the “iterated” aspect of the replenishments where the carrying costs keep accruing charges until units get sold. Calculating 1 step ahead is easy, 2 steps ahead a little harder, and N steps ahead is actually pretty complicated.

Not so long ago, we managed to crack this problem with the stock reward function. This function breaks down the challenges through three economic variables: the per-unit profit margin, the per-unit stock-out cost and the per-unit carrying cost. Through the stock reward function, one can get the actual economic impact broken down into margins, stock-outs and carrying costs.
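A simplified, one-step-ahead sketch of the underlying idea: the k-th unit in stock sells only if demand reaches k, so its score weights the margin and avoided stock-out cost against the carrying cost. The names and the exact cost treatment are illustrative, not the actual stock reward function, which also handles the iterated aspect:

```python
# One-step-ahead sketch of scoring the k-th unit of a SKU. The k-th
# unit sells only if demand reaches k, so its score weights margin and
# avoided stock-out cost against the carrying cost. Names illustrative;
# the real stock reward function also handles the iterated aspect.
def unit_score(k, demand_probs, margin, stockout_cost, carrying_cost):
    """demand_probs[d] = probability that demand equals d."""
    p_sell = sum(p for d, p in enumerate(demand_probs) if d >= k)
    return p_sell * (margin + stockout_cost) - (1.0 - p_sell) * carrying_cost
```

Because P(demand >= k) decreases with k, the score of each extra unit decreases too, which is exactly what makes a per-unit ranking across all SKUs possible.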

The stock reward function represents a superior alternative to all the scoring functions we have used so far. Actually, it can even be considered as a mini framework that can be adjusted with a small (but quite expressive) set of economic variables in order to best tackle the strategic goals of merchants, manufacturers or wholesalers. We recommend using this function whenever probabilistic forecasts are involved.

Over the course of the coming weeks, we will gradually update all our Envision templates and documentation materials to reflect this new Lokad capability.

Categories: Tags: insights supply chain envision No Comments

Hiring a software engineer with taste for compilers and big data

Published on by Joannes Vermorel.

Lokad is growing, we are hiring again.

At Lokad we use Envision, our in-house programming language, to write data analysis scripts and adapt our forecasts to the business constraints of our customers, processing hundreds of gigabytes of data each day.

Envision is a modern, strongly typed, high-performance relational language with inspiration from SQL, Python, R and the Excel approach to column data. Its state-of-the-art compiler performs type and table inference to minimize the need for annotations and uses static analysis to optimize the execution plan and reuse cached data from previous runs, generating scripts in an intermediate language that is compiled down to CIL and allows the injection of custom C# code.

Envision, its compiler and its tooling are still growing, and we are looking for new team members to help us develop it further. You would be contributing to the core compiler code, implementing new language features and optimization modes.

You will benefit from an awesome dev team to support you and from a calm working environment (nobody works in open spaces at Lokad). You will be reporting directly to the CTO of the company.

Some experience of working on compilers, or with operational or denotational semantics, is expected (for a junior position, a university compiler project would qualify). In-depth knowledge of SQL, relational algebra or pure functional languages is a big plus. We do not require any prior knowledge of our development and production stack (C#, .NET, Visual Studio, Azure and Git).

We expect you to be fluent in English.

About us: Lokad is a software company that specializes in Big Data for commerce. We help merchants, and a few other verticals (aerospace, manufacturing), to forecast their inventory and to optimize their prices. We are profitable and we are growing fast. We are closing deals in North America, Europe and Asia. The vast majority of our clients are outside France. We are located 50m from Place d'Italie in Paris (France).

To apply: Drop an email with your resume at

Categories: Tags: hiring No Comments

Optimizing container shipments

Published on by Joannes Vermorel.

Supply chain management has long gone global: even small businesses are now importing goods from overseas whenever they identify the right business opportunities. However, while supply chain data can flow back and forth across the globe at a fraction of the speed of light, physical goods are still mostly freighted via containers with lead times counted in weeks, if not months. On top of that, containers further complicate the task of supply chain practitioners by imposing both volume and weight constraints.

Lokad is now supporting dozens of companies to help optimize their order quantities while taking into account their container shipment constraints. Below, we review some of the most important insights that we have gathered when dealing with demand planning combined with container shipments.

The most frequently overlooked aspect of dealing with containers is probably the importance of the ordering lead times. Indeed, except for extremely large businesses, ordering in containers imposes significant waiting periods between successive orders to the same supplier. Neglecting the ordering lead times results in significant under-estimation of the demand to be covered and causes costly stock-outs. Consequently, the ordering lead time, like the supply lead time, needs to be forecasted too. This makes it not just a demand forecast, but a lead time forecast as well.

Then, the second most overlooked factor is how poorly classic ordering policies, such as order-up-to-level or order-quantity, fit the constraints associated with container shipments. In practice, such ordering policies fail to satisfy the necessary constraints, and as a result, the ordered quantities either exceed or underuse the container capacity. For this reason, supply chain practitioners end up spending a lot of time making manual corrections in order to get the quantities to match the container capacity. A much more efficient solution involves using a prioritized ordering policy, where items can be added up to the point where the container is full.

When Lokad tackles demand planning in the presence of container shipment constraints, the two primary questions we strive to address are:

  • What is the “best” composition of the next container to be ordered (which items, which quantities)?
  • What is the expected profitability of this next best container?

As long as we can address these two questions, ordering from suppliers becomes a piece of cake. All it takes is refreshing the forecasting “logic” in your Lokad account on a daily basis, and checking whether the next “best” container to be ordered reaches a certain profitability threshold; and when it does, just ordering the suggested quantities. This process is even more flexible than filling the container up to its full capacity as it is possible to consider circumstances where the most profitable containers are not filled up to 100%. In fact, it’s really up to the profitability analysis to decide whether each item is worth putting in the next container or not.
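The prioritized filling of a container can be sketched as a greedy loop over candidate units, best economic score first, subject to the volume and weight limits. This is an illustrative simplification, not Lokad's implementation:

```python
# Greedy sketch of prioritized container filling: units are added best
# score first, while the volume and weight limits still allow them.
def fill_container(units, max_volume, max_weight):
    """units: list of (score, volume, weight), one entry per candidate
    unit. Returns the units retained, best scores first."""
    chosen, vol, wgt = [], 0.0, 0.0
    for score, v, w in sorted(units, key=lambda u: u[0], reverse=True):
        if vol + v <= max_volume and wgt + w <= max_weight:
            chosen.append((score, v, w))
            vol += v
            wgt += w
    return chosen
```

Note that the loop does not force the container to 100% fill: a low-score unit is simply never added, which is consistent with letting the profitability analysis decide what goes in.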

Computing precise estimations of both margins and costs requires a forecasting technology that is capable of considering a myriad of scenarios. At Lokad, we achieve this through probabilistic forecasts: we don’t forecast the average demand, but the probabilities associated with (almost) all future demand levels. Through our probabilistic forecasts, every scenario can be assessed financially and then weighted against its probability. Finally, every container’s potential composition can be assessed through its weighted average of financial outcomes, the weights being the probabilities associated with the respective demand scenarios.
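The probability-weighted assessment described above is, at its core, an expected value over demand scenarios; a minimal sketch, with made-up payoffs and probabilities:

```python
# The probability-weighted assessment is, at its core, an expected
# value over demand scenarios. Payoffs and probabilities are made up.
def expected_outcome(scenarios):
    """scenarios: list of (probability, financial_outcome) pairs."""
    total_p = sum(p for p, _ in scenarios)
    assert abs(total_p - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(p * outcome for p, outcome in scenarios)

# One candidate container composition under three demand scenarios:
container_value = expected_outcome([(0.2, -100.0), (0.5, 50.0), (0.3, 200.0)])
```

A composition is then worth ordering when this expected value clears the chosen profitability threshold, even if the container is not filled to 100%.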

The method for handling container shipments that we have just briefly described might look quite intensive as far as computations are concerned. Well, it is. However, the time and expertise of supply chain practitioners are far too valuable to be “burned away” spending countless hours on tweaking Excel sheets.

This leads us to our third most overlooked aspect relating to containers: manually composing containers is a very tedious process, and it comes at the expense of more fundamental supply chain improvements. Indeed, we frequently notice that small companies could order containers more frequently; however, the process of figuring out the exact composition of containers is so time-consuming that, realistically, it generally can’t be done more than once a month. In a similar vein, for larger companies, we often notice that opportunities to consolidate shipments from multiple suppliers shipping from the same port are frequently dismissed, not because they are impractical, but simply because this would require a method that cannot be supported by manual processes.

As a result, in practice, manual container composition "hits" companies in two different ways: first, because the composition of the container isn’t really optimized in the first place, and second, because it consumes most of the supply chain management resources which would be better used for improving the supply chain on the whole.

Lokad’s technology makes it quite straightforward to compose optimized containers in a fully automated manner. Check out our more technical entries in case you would like to tackle the challenge yourself. In practice, our Lokad team is here to assist your company in getting it right, as containers might not be the only constraint that your company is facing: there might be minimum order quantities, warehouse storage capacity, etc.

Categories: Tags: No Comments

Retail pricing strategies

Published on by Noora Kekkonen.

Pricing strategies are an essential part of demand forecasting as prices directly influence demand. All too often companies settle for benchmarking prices, when they actually should benchmark pricing strategies. Therefore, we have extended our knowledge base with a new collection of articles about the most popular pricing methods used in retail.

Pricing concepts

At Lokad we believe in optimizing pricing strategies instead of raw prices. By ‘pricing strategies’, we are referring here to the method of computing optimized prices given the available data and the market conditions. In order to assess the quality of a pricing strategy one might refer to the price elasticity of demand, which is a popular method. However, price elasticity can be misleading as it is a limited indicator of demand.

Depending on the type of the market, retailers can choose short-term or long-term pricing strategies. A high price maximizes short-term profit, but will result in a loss of market share. A low price maximizes long-term profit because it allows a firm to gain market share. In both cases, prices are best when frequently re-evaluated. Repricing software, such as Lokad, aids in this re-evaluation by automatically recomputing prices depending on market conditions.

Most popular pricing strategies

In order to affect buying behavior, retailers can choose from a vast range of pricing strategies. For instance, one may want to increase the willingness to pay by creating product bundles and using bundle pricing; or use the same prices as one’s competitors with competitive pricing; or set prices based on the production costs and the desired level of mark-up with cost-plus pricing.

The decoy pricing method can be applied when one wishes to influence the customer with either a slightly lower price for a much lower quality product, or on the contrary, a much higher price for a slightly higher quality product.

One widely used method is odd pricing, which aims to maximize profit by making micro-adjustments in the pricing structure. For example, this could mean setting a price at $17.99 instead of $17. In addition to the price structure, a retailer may also want to optimize the style of the prices. For some types of markets, price skimming can be a good option. This method consists of applying a very high price at first for the “early adopters” and then gradually decreasing the price over time. The opposite way of pricing is the penetration method. This quite aggressive type of pricing means setting the price at a very low level in order to increase demand, and then raising it later.

Categories: Tags: pricing insights No Comments

Price elasticity is a poor angle for looking at demand planning

Published on by Joannes Vermorel.

Lokad regularly gets asked to leverage an approach based on the price elasticity of demand for demand planning purposes; most notably to handle promotions. Unfortunately, statistical forecasting is counter-intuitive, and while leveraging demand elasticity might feel like a “good” approach, our extensive experience with promotions indicates that this approach is misguided and nearly always does more harm than good. Let’s briefly review what goes wrong with price elasticity.

A local indicator

Price elasticity is fundamentally a local indicator - in the mathematical sense. So while it is possible to compute the local coefficient of the price elasticity of demand, there is no guarantee that this local coefficient has any similarity with other coefficients that would be computed for alternate prices.

For example, it might make sense for McDonald’s to assess the elasticity coefficient for, say, the Big Mac moving from $3.99 to $3.89 because it’s a small price move - of about 2.5% in amplitude - and the new price remains very close to the old price. And given McDonald’s scale of activity, it’s not unreasonable to assume that the demand function is relatively smooth with respect to the price.
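For such a small price move, a local elasticity coefficient can be estimated with the standard midpoint (arc) formula; the demand figures below are purely illustrative:

```python
# Arc (midpoint) elasticity around a small price move:
#   e = (dQ / Q_mid) / (dP / P_mid)
# The demand response below is a made-up illustration.
def arc_elasticity(p0, p1, q0, q1):
    dq = (q1 - q0) / ((q0 + q1) / 2.0)  # relative change in demand
    dp = (p1 - p0) / ((p0 + p1) / 2.0)  # relative change in price
    return dq / dp

# Price cut from $3.99 to $3.89, demand rising from 1000 to 1030 units:
e = arc_elasticity(3.99, 3.89, 1000, 1030)
```

The coefficient obtained this way only characterizes the demand response in the immediate vicinity of $3.99; nothing guarantees it holds for a 20% promotional discount.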

At the other end of the spectrum, promotions, especially promotions in the FMCG (fast moving consumer goods) and general merchandise sectors, are completely unlike the McDonald’s case described above. A promotion typically shifts the price by more than 20%, an entirely non-local move yielding very erratic results, completely unlike the smooth macro-effects that may be observed for McDonald's and its Big Mac.

Thresholds all over the place

The price elasticity insight is fundamentally geared towards smooth differentiable functions of demand. Oh yes, it is theoretically possible to approximate even a very rugged function with a differentiable one, but in practice, the numerical performance of this viewpoint is very poor. Indeed, markets are full of threshold effects: if customers are very price sensitive, then being able to offer them a price just a little bit lower than any competitors can alter the market share rather dramatically. In such markets, it’s unreasonable to assume that demand will smoothly respond to price changes. On the contrary, demand responses should be expected to be swift and erratic.

Hidden co-variables

Last but not least, one fundamental issue with using price elasticity for demand planning in the context of promotions is that price elasticity puts too much emphasis on the pricing aspect of demand. There are other variables, the so-called co-variables, that have a deep influence on the overall level of demand. These co-variables too often remain hidden, even though identifying them is very much feasible.

Indeed, a promotion is first and foremost a negotiation that takes place between a supplier and a distributor. The expected increase in demand does certainly depend on the price, but our observations indicate that changes in demand primarily depend on the way a given promotion is executed by the distributor. Indeed, the commitment on extra volume, a strong promotional message, additional or better-located shelf space and the potential temporary de-emphasis of competing products typically impact demand in ways that dwarf the pricing impact when it's examined on its own.

Reducing the promotional uplift to a matter of price elasticity is frequently a misguided numerical approach standing in the way of better demand planning. A deep understanding of the structure of promotions is more important than the prices.

Categories: Tags: promotion forecasting No Comments

Streetlight effect and forecasting

Published on by Joannes Vermorel.

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is." David H. Freedman (2010). Wrong: Why Experts Keep Failing Us.

One of the most paradoxical things about “classic” forecasts is that they look for the average - sometimes median - value of the future demand, while this average case, as we will see below, is mostly irrelevant. Whenever daily, weekly or monthly forecasts are being used, these can be considered as average forecasts. Why? Because other kinds of forecasts, like quantile forecasts, are not additive, which makes them fairly counter-intuitive. In fact, most supply chain practitioners aren’t even aware that alternatives to “classic” forecasts exist in the first place.
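The difference between an average forecast and a quantile forecast can be made concrete with a tiny discrete demand distribution (the numbers are illustrative):

```python
# Illustrative discrete demand distribution: probs[d] = P(demand = d).
def quantile_forecast(demand_probs, service_level):
    """Smallest demand level whose cumulative probability reaches
    the target service level."""
    cumulative = 0.0
    for d, p in enumerate(demand_probs):
        cumulative += p
        if cumulative >= service_level:
            return d
    return len(demand_probs) - 1

def mean_forecast(demand_probs):
    """The 'classic' average forecast."""
    return sum(d * p for d, p in enumerate(demand_probs))

probs = [0.05, 0.25, 0.3, 0.2, 0.1, 0.05, 0.03, 0.02]
```

With this distribution, the average forecast sits near 2.4 units, while covering 94% of the demand scenarios requires stocking 5 units: it is the tail, not the average, that drives the inventory decision.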

However, business-wise, as far as inventory is concerned, it’s not the middle ground that costs money, rather it’s the extremes. On the one hand, there is the unexpectedly high demand that causes a stock-out. On the other hand, there is the unexpectedly low demand that causes dead inventory. When the demand level is roughly where it was expected to be, inventory levels gently fluctuate, and inventory rotates very satisfyingly.

As a result, there is no point in optimizing the average case, i.e. when inventory is rotating very satisfyingly, because there is little or nothing to improve in the first place. It’s the extremes that need to be taken care of. Actually, most practitioners are keenly aware of this issue, as their top 2 problems are to improve the service quality on the one hand (i.e. mitigating the unexpectedly high demand), while keeping the stock levels in check on the other hand (i.e. mitigating the unexpectedly low demand).

Yet, since we have agreed that supply chain challenges are mainly concerned with the “extremes”, why do so many companies still look for answers through “average” forecasts? I believe that supply chain management, as an industry, is suffering from a bad case of drunkard’s search, a problem also known as the streetlight effect. Classic tools and processes shed light on “average” situations which barely need to be illuminated any further, while leaving entirely in the dark whatever lies at the extremes.

A frequent misconception consists of thinking that improving the “middle” case should also marginally improve the extremes. Alas, statistical forecasting is counter-intuitive, and basic numerical analysis shows that this is simply not the case. Statistical forecasting is like a microscope: while incredibly sharp, its focus is also incredibly narrow.

Trying to fix your supply chain problems through classic “average” forecasts is like trying to diagnose why your car refuses to start by putting every single car part under a microscope, starting with the engine. At this rate, you will probably never manage to diagnose that your car won’t move because the gas tank is empty, which, in hindsight, was a pretty obvious problem.

However, this is not the end of the insanity. Now imagine that the repair guy, after failing to diagnose why your car isn’t moving, started to claim that his diagnosis had failed because his microscope didn’t have enough resolution. And now the repair guy is asking you for more money so that he can buy a better microscope.

Well, a similar scenario is presently happening in many companies: the previous forecasting initiative has failed to deliver the desired inventory performance, and companies double down with another forecasting initiative along the very lines that caused the first initiative to fail in the first place.

At Lokad, it took us 5 years to realize that the classic forecasting approach wasn’t working, and worse, that it would never work no matter how much technology we threw at the problem, just like switching to a $27M ultra-high resolution microscope would never have helped the repair guy to diagnose your empty tank. In 2012, we rolled out quantile forecasts, which we have steadily kept improving; and suddenly, things started working.

Those five years of steady, ongoing failures felt long, very long. In our defense, when an entire industry works on false promises that can be traced back to university textbooks, it’s not that easy to start thinking outside the box when the box itself is so huge that you can spend your life wandering in circles inside it and never hit the walls.

Categories: Tags: insights forecasting 1 Comment

Magento in beta at Lokad

Published on by Joannes Vermorel.

Just a few days ago we announced Lokad's integration with Shopify. Today, it's the turn of another vastly popular content management system for e-commerce: the native integration of Magento is now live in beta at Lokad.

This integration relies on Magento's REST API, which has been available since version 1.7, released back in April 2012. The authentication relies on OAuth. The set-up requires a bit of configuration in the admin panel within Magento to grant access to a third-party app like Lokad. However, thanks to this set-up, you get very fine-grained control over which data Lokad can read or write (hint: we only need read-only access).

This integration is still in beta, as we haven't yet thoroughly tested it against the many versions of Magento released during the last 3 years. Don't hesitate to give it a try though; Lokad is here to help you get started in case you face any technical difficulties.

Categories: Tags: release magento No Comments

Shopify integrated by Lokad

Published on by Joannes Vermorel.

The retail platform Shopify is our latest integration. Now, Shopify-powered merchants can get advanced inventory forecasts and powerful commerce analytics in just a few clicks. Check out the Lokad app in the Shopify appstore.

Through the Shopify API, Lokad retrieves all the product and sales data that contribute to your inventory optimization and your pricing optimization. Don't let the competition outservice your business.

As usual, the Lokad team is here to help. This integration is still very recent, and glitches may happen. Don't hesitate to contact us if you face any issue while plugging your Shopify store into Lokad.

Categories: Tags: shopify release No Comments

Forecasting the series of future orders to suppliers

Published on by Joannes Vermorel.

Collaborative supply chain management makes a lot of sense. In today’s day and age of ubiquitous internet connection, why should your suppliers be kept in the dark concerning your upcoming purchase orders? After all, if your company is capable of producing accurate forecasts about your upcoming orders, sharing these forecasts with your suppliers would certainly be of great help to them, which, in turn, would yield better service and/or better prices.

Yes, but all of this relies on a flawed assumption: order forecasts ought to be accurate. Unfortunately, they won’t be. Period. So whatever follows is merely wishful thinking.

Companies frequently get back to us asking if Lokad could forecast the sequence of upcoming purchase orders. After all, we should have everything it takes:

  • daily/weekly future sales levels (forecasted)
  • current stock levels, both on hand and on order
  • purchase constraints

By combining the different elements mentioned above, we could certainly roll out a simulation, and consequently forecast the upcoming purchase orders for a given period specified by a client. However, although this is technically possible, the results of such an operation would be disastrous. In this short post, we share our insights on this issue to help companies avoid wasting time on such forecasting attempts.
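For the sake of the argument, here is roughly what such a simulation would look like, sketched as a naive reorder-point rollout in Python; every number below is made up, and as explained next, the order dates this rollout emits are precisely what cannot be trusted:

```python
# Made-up daily demand forecast and inventory parameters.
forecast = [4, 5, 3, 6, 5, 4, 7, 5, 3, 6]   # forecasted daily demand
stock = 20                                   # on hand + on order
reorder_point = 8
order_qty = 25                               # e.g. dictated by an MOQ

# Walk the forecast forward, emitting a purchase order whenever the
# projected stock level crosses the reorder point.
orders = []
for day, demand in enumerate(forecast):
    stock -= demand
    if stock <= reorder_point:
        orders.append((day, order_qty))
        stock += order_qty

print(orders)  # a list of (day, quantity) purchase orders
```

The rollout mechanically produces a full schedule of future orders, which is exactly why the request sounds reasonable; the problem is that the forecast errors compound at every simulated step.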

Statistics are terribly counter-intuitive. As mentioned in our previous posts, “intuitive” approaches are most certainly wrong; and the “correct” approaches are unsettling at best.

The central problem with supplier order forecasting is that the calculations involved rely on an iterated sum of forecasts, which is wrong on multiple levels. In particular, forecasting the next purchase order involves not one but two variables: the date of the order and the quantity ordered. Depending on the supply chain constraints, the quantity ordered might be relatively straightforward to forecast: if you have a minimum order quantity (MOQ), the order is likely to equal the MOQ threshold itself. On the other hand, if the item is expensive and rarely sold, the next quantity to be ordered is likely to be a single unit.

The true challenge lies in forecasting the date of the next purchase order, and even more challenging, forecasting the date of the purchase order that follows. Indeed, not only is the date of the next purchase order likely to carry a 20% to 30% error (like pretty much any demand forecast), but the date of the order that follows will have (roughly) twice that error, the one after that (roughly) three times the error, etc.
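This compounding effect can be sketched with a small Monte Carlo simulation; the 30-day interval and the 25% error below are made-up numbers for illustration only:

```python
import random
import statistics

random.seed(7)

def simulate_order_dates(n_orders, mean_interval=30.0, rel_error=0.25):
    # Each inter-order interval is forecast at `mean_interval` days,
    # but the realized interval carries a random error.
    date = 0.0
    dates = []
    for _ in range(n_orders):
        date += random.gauss(mean_interval, rel_error * mean_interval)
        dates.append(date)
    return dates

# Run many simulations and measure the spread of each order date.
runs = [simulate_order_dates(5) for _ in range(10_000)]
spreads = [statistics.stdev(run[n] for run in runs) for n in range(5)]
for n, spread in enumerate(spreads):
    print(f"order {n + 1}: stdev of the date is {spread:.1f} days")
```

Under these assumptions, the uncertainty on the 5th order date is more than twice the uncertainty on the 1st; with correlated errors, which are the norm in practice, the degradation is even faster.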

As illustrated in the schema above, the uncertainty regarding the date of the Nth upcoming purchase order grows so fast in practice, that it becomes a worthless piece of information for the supplier. The supplier will be much better off doing her own forecasts based on her own demand history, even if this forecast can’t leverage the most recent demand signal, as observed downstream.

However, while forecasting purchase orders and sharing them with the suppliers doesn’t work, moving towards more collaborative supply chain management remains a valid business goal; it just happens that this type of forecasts is not the right way to execute this objective.

Stay tuned, we will make sure to discuss here in due course how collaborative supply chain management can be correctly executed from a predictive perspective.

Categories: Tags: insights forecasting 2 Comments

NetSuite integrated by Lokad

Published on by Joannes Vermorel.

NetSuite was one of the first ERP systems operating fully in SaaS mode. Over the years, the NetSuite solution has steadily expanded, and NetSuite now features an extensive business suite which includes financials, CRM and more.

Today, we are proud to announce that NetSuite is now natively supported by Lokad. Thanks to the SuiteTalk integration (web service), Lokad can import the entirety of the NetSuite data and deliver advanced inventory forecasts and/or pricing optimization solutions.

The NetSuite integration is already live. We import inventory items, sales orders and purchase orders. All you need to get started is a Lokad account, and you can get one for free in less than 1 minute.

The Lokad team is here to take extra special care of our early NetSuite-powered clients to make sure everything goes very smoothly.

Categories: Tags: release netsuite No Comments

Supply Chain Antipatterns, a Lokad initiative

Published on by Joannes Vermorel.

Most supply chain initiatives fail. Yet, over the years at Lokad, we started to realize that those failures are far from random: there are recurrent patterns that invariably lead to failure. Thus, we decided to start a side initiative to survey the most frequent misguided responses to supply chain challenges through our new blog Supply Chain Antipatterns.

The blog comes with real comic strips produced in-house by the Lokad team.

This initiative is intended as a collaborative effort. Don't hesitate to tell the tale of your own misguided attempts (you may remain anonymous too). It might help more than a few companies to avoid falling for the same problem in the future.

Categories: Tags: antipatterns insights supply chain No Comments

Beyond software: inventory optimization as a service

Published on by Joannes Vermorel.

SaaS is the promise of no software. However, as far as inventory optimization is concerned, no matter how good the service, software alone can’t address the full challenge. Delivering inventory performance takes significant effort:

  • Quantitative performance metrics must be carefully aligned with business goals; otherwise the “system” is just going to let your business accelerate in the wrong direction.
  • Historical data must be thoroughly qualified; otherwise, the “system” will fall for the all-too-common garbage in, garbage out problem.
  • The statistical tools must be handled with care; in particular, statistics can be terribly counter-intuitive, and the incorrect usage of the tools won’t yield the expected results.

More than one year ago, as we noticed that our clients were frequently struggling with those challenges, we started offering end-to-end inventory optimization services, upgrading clients from their original software-only subscription plans toward plans with very hands-on support.

What started as a favor for our largest clients turned out to work exceedingly well. The Lokad team managed to deliver very tangible ROI even for very tough challenges like the ones we face in aerospace supply chains. Moreover, we learned to deliver ROI measured against the very specific KPIs established by the client herself, which is not only the right thing to do, but also a great way to establish trust.

Today, we are starting to offer Premier subscription plans: end-to-end inventory optimization services. With a Premier plan, the Lokad team gets hands-on, and the motto is simple: do whatever it takes to deliver inventory performance. Naturally, having access to both state-of-the-art forecasts and powerful tools for supply chain analytics is a good starting point for the supply chain specialists working at Lokad.

Do you feel that your company is holding too much inventory to serve too few clients? Yet, the prospect of hiring a team of data scientists / supply chain specialists looks daunting? Just contact us, we can help.

Categories: Tags: business release No Comments

Data qualification is critical

Published on by Joannes Vermorel.

Wikipedia lists seven steps for the data analysis process: data requirements, data collection, data processing, data cleaning, exploratory data analysis, data modeling, and finally the generation of production results. When Lokad forecasts inventory, optimizes prices, or tackles any other kind of commerce optimization, our process is very similar to the one described above. However, there is another vital step that accounts for more than half of all the effort typically invested by Lokad’s team, and that is not even part of the list above. This step is data qualification.

Now that “Big Data” has become a buzzword, myriads of companies are trying to do more with their data. Data qualification is probably the second largest cause of project failures, right after unclear or unwise business goals - which happens anytime an initiative starts from the “solution” rather than starting from the “problem”. Let’s shed some light on this mysterious “data qualification” step.

Data as a by-product of business apps

The vast majority of business software is designed to help operate companies: the Point-Of-Sale system is there to allow clients pay; the Warehouse Management System is there to pick and store products; the Web Conferencing software lets people carry out their meetings online, etc. Such software might be producing data too, but data is only a secondary by-product of the primary purpose of this software.

The systems mentioned are designed to operate the business, and as a result, whenever a practitioner has to choose between better operations and better data, better operations will always be favored. For example, if a barcode fails to scan at the point of sale of your local hypermarket, the cashier will invariably pick a product that happens to have the same price and scan it twice; sometimes cashiers even keep a cheat sheet of barcodes gathered on a piece of paper. The cashier is right: the No. 1 priority is to let the client pay, no matter what. Generating accurate stock records is not an immediate goal when compared to the urgent need of serving a line of clients.

One might argue that the barcode scanning issue is actually a data cleaning issue. However, the situation is quite subtle: records remain accurate to some extent since the amount charged to the client remains correct and so does the count of items in the basket. Naively filtering out all the suspicious records would do more harm than good for most analysis.

Yet, we observe that too often, companies – and their software vendors too – enthusiastically ignore this fundamental pattern for nearly all business data that are generated, jumping straight from data processing to data cleaning.

Data qualification relates to the semantics of the data

The goal of the data qualification step is to clarify and thoroughly document the semantics of the data. Most of the time, when (large) companies send tabular data files to Lokad, they also send us an Excel sheet where each column found in the files gets a short line of documentation, typically like: Price: the price of the product. However, such a brief documentation line leaves a myriad of questions open:

  • what is the currency applicable for the product?
  • is it a price with or without tax?
  • is there some other variable (like a discount) that impacts the actual price?
  • is it really the same price for the product across all channels?
  • is the price value supposed to make sense for products that are not yet sold?
  • are there edge-case situations like zeros to reflect missing values?
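As a small illustration of why such questions matter, consider the last point: a hypothetical price column where zero encodes a missing value silently distorts even the simplest statistic unless the convention is documented first:

```python
# Hypothetical price records where 0.0 encodes "missing", not "free".
prices = [12.5, 9.9, 0.0, 14.0, 0.0, 11.2]

# Naive average, computed without knowing the zero-as-missing convention.
naive_avg = sum(prices) / len(prices)

# Qualified average: the documentation now says zeros are missing values.
known_prices = [p for p in prices if p > 0]
qualified_avg = sum(known_prices) / len(known_prices)

print(naive_avg, qualified_avg)
```

Nothing in the data file itself distinguishes a free product from a missing price; only the documentation produced during the qualification step does.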

Dates are also excellent candidates for semantic ambiguities. When an orders table contains a date column, the date-time can refer to the time of:

  • the basket validation
  • the payment entry
  • the payment clearance
  • the creation of the order in the accounting package
  • the dispatch
  • the delivery
  • the closure of the order

However, such a shortlist hardly covers the actual oddities encountered in real-life situations. Recently, for example, while working for one of the largest European online businesses, we realized that the dates associated with purchase orders did not have the same meaning depending on the originating country of the supplier factories. European suppliers were shipping by truck, and the date reflected the arrival at the warehouse; Asian suppliers were shipping by, well, ship, and the date reflected the arrival at the port. This little twist typically accounted for more than 10 days of difference in the lead time calculation.
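This twist can be made concrete with a minimal sketch; all the dates below are hypothetical:

```python
import datetime as dt

# Hypothetical purchase order from an overseas supplier: ordered Jan 5,
# arrived at the port on Feb 1, reached the warehouse on Feb 11.
ordered = dt.date(2015, 1, 5)
port_arrival = dt.date(2015, 2, 1)
warehouse_arrival = dt.date(2015, 2, 11)

# The "same" lead time, computed under the two date semantics.
lead_time_port = (port_arrival - ordered).days
lead_time_warehouse = (warehouse_arrival - ordered).days

print(lead_time_port, lead_time_warehouse)
```

A reorder policy calibrated against the port-arrival semantics would systematically under-estimate the true lead time by 10 days for these suppliers.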

For business-related datasets, the semantics of the data are nearly always dependent on the underlying company processes and practices. Documentation relating to such processes, when it exists at all, typically focuses on what is of interest to the management or the auditors, but very rarely on the myriad of tiny elements that exist within the company IT landscape. Yet, the devil is in the details.

Data qualification is not data cleaning

Data cleaning (or cleansing) makes the most sense in experimental sciences, where certain data points (outliers) need to be removed because they would incorrectly “bend” the experiments. For example, some measurements in an optics experiment might simply reflect a defect in the optical sensor rather than anything actually relevant to the study.

However, this process does not reflect what is typically needed while analyzing business data. Outliers might be encountered when dealing with the leftovers of a botched database recovery, but mostly, outliers are marginal. The (business-wise) integrity of the vast majority of databases currently in production is excellent. Erroneous entries exist, but most modern systems do a good job of preventing the most frequent ones, and are quite supportive when it comes to fixing them afterwards as well. Data qualification is very different in the sense that the goal is neither to remove nor to correct data points, but rather to shed light on the data as a whole, so that subsequent analysis truly makes sense. The only thing that gets “altered” by the data qualification process is the original data documentation.

Data qualification is the bulk of the effort

While working on dozens of data-driven projects related to commerce, aerospace, hospitality, bioinformatics and energy, we have observed that data qualification has always been the most demanding step of the project. Machine learning algorithms might appear sophisticated, but as long as the initiative remains within the well-known boundaries of regression or classification problems, success in machine learning is mostly a matter of prior domain knowledge. The same goes for Big Data processing.

Data qualification problems are insidious because you don’t know what you’re missing: there is a semantic gap between the “true” semantics, as they should be understood from the data produced by the systems in place, and the “actual” semantics, as perceived by the people carrying out the data analysis. What you don’t know can hurt you. Sometimes, the semantic gap completely invalidates the entire analysis.

We observe that most IT practitioners vastly underestimate the depth of peculiarities that come with most real-life business datasets. Most businesses don’t even have a full line of documentation per table field. Yet, we typically find that even with half a page of documentation per field, the documentation is still far from thorough.

One of the (many) challenges faced by Lokad is that it is difficult to charge for something that is not even perceived as a need in the first place. Thus, we frequently carry out data qualification work under the guise of nobler, more scientific-sounding tasks like “statistical algorithm tuning”.

The reality of the work, however, is that data qualification is not only intensive from a manpower perspective, it’s also a truly challenging task in itself. It’s a mix of understanding the business, understanding how processes spread over many systems - some of them invariably of the legacy kind - and bridging the gap between the data as it exists and the expectations of the machine learning pipeline.

Most companies vastly underinvest in data qualification. In addition to being an underestimated challenge, investing talent in data qualification does not result in a flashy demo or even in actual numbers. As a result, companies rush to the later stages of the data analysis process, only to find themselves swimming in molasses because nothing really works as expected. There is no quick fix for an actual understanding of the data.

Categories: Tags: insights bigdata No Comments

Hiring our Chief Marketing Officer!

Published on by Joannes Vermorel.

We are hiring a lead generation wizard!

Lokad is a software company that specializes in quantitative optimization for commerce. We help merchants, and a few other verticals, to forecast their inventory and to optimize their prices. We are profitable; we are still small but growing fast. We are closing deals in North America, Europe and Asia. The vast majority of our clients are not in France.

Lokad is sold through the web, relying almost exclusively on inbound marketing. We have hundreds of leads per month, but we are aiming for thousands. So far, marketing has been done part-time by the founder, but it's time to put marketing in more capable hands.

As the Chief Marketing Officer at Lokad, you will have one metric: the number of qualified leads; and we expect you to own a lead commit as well. At this stage, we do not care about corporate marketing, only lead generation matters. The web is the native marketing channel of Lokad. While other channels can be leveraged, we expect you to steadily increase the presence of Lokad on the web to generate the bulk of lead growth.

Our technology is very noticeable, and we need you to make sure that decision makers do notice. Our reach is the world. Lokad is already available in many languages beyond English.

We are located 50m from Place d'Italie, Paris, France.

Desired Skills and Experience

You have two years or more of experience in lead generation marketing for a B2B SaaS company. With a bit of help from a graphic designer, you can deliver awesome web marketing materials. Your written communication skills are top notch, and big bonus to you if you have a blog with some audience. B2B stuff is usually boring and non-viral, and yet, you can make things happen: you can vanquish the market inertia and make people pay attention. Naturally, you are perfectly fluent in English. Speaking French is a bonus but not a requirement.

Categories: Tags: hiring No Comments

SkuVault natively integrated

Published on by Joannes Vermorel.

SkuVault is warehouse management software tailored for eCommerce. We are pleased to announce that SkuVault is now natively supported by Lokad. Importing the SkuVault historical data into Lokad can now be done with a single click - or no clicks at all using our scheduling feature. Now SkuVault-powered businesses can get advanced inventory forecasts as well as powerful commerce analytics within minutes.

Categories: Tags: partners No Comments