Optimizing container shipments

Published by Joannes Vermorel.

Supply chain management has long gone global: even small businesses are now importing goods from overseas whenever they identify the right business opportunities. However, while supply chain data can flow back and forth across the globe at a fraction of the speed of light, physical goods are still mostly freighted via containers with lead times counted in weeks, if not months. On top of that, containers further complicate the task of supply chain practitioners by imposing both volume and weight constraints.

Lokad now supports dozens of companies in optimizing their order quantities while taking their container shipment constraints into account. Below, we review some of the most important insights that we have gathered when dealing with demand planning combined with container shipments.

The most frequently overlooked aspect of dealing with containers is probably the importance of the ordering lead times. Indeed, except for extremely large businesses, ordering in containers imposes significant waiting periods between successive orders to the same supplier. Neglecting the ordering lead times results in a significant under-estimation of the demand to be covered, and causes costly stock-outs. Consequently, the ordering lead time, like the supply lead time, needs to be forecast too: the problem calls not just for a demand forecast, but for a lead time forecast as well.

Then, the second most overlooked factor is how poorly classic ordering policies, such as order-up-to-level or order-quantity, fit the constraints associated with container shipments. In practice, such ordering policies fail to satisfy these constraints, and the ordered quantities either exceed the container capacity or leave it underused. As a result, supply chain practitioners end up spending a lot of time making manual corrections to get the quantities to match the container capacity. A much more efficient solution is a prioritized ordering policy, where items keep being added up to the point where the container is full.
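
To make this prioritized policy concrete, here is a minimal sketch in Python - illustrative data, not Lokad's actual implementation - of a greedy fill that appends units in decreasing priority order for as long as the volume and weight capacities of the container allow:

```python
# Minimal sketch of a prioritized ordering policy (illustrative data).
# Each candidate line is one extra unit of an item, ranked by a
# precomputed economic score (e.g. expected return of that unit).
candidates = [
    {"sku": "A", "score": 9.2, "volume": 0.4, "weight": 12.0},
    {"sku": "B", "score": 7.5, "volume": 1.1, "weight": 30.0},
    {"sku": "A", "score": 4.1, "volume": 0.4, "weight": 12.0},  # 2nd unit of A
    {"sku": "C", "score": 2.8, "volume": 0.7, "weight": 8.0},
]

MAX_VOLUME = 2.0   # container capacity, deliberately small for the demo
MAX_WEIGHT = 60.0

def fill_container(candidates, max_volume, max_weight):
    """Greedily add units in priority order while the container has room."""
    order, used_volume, used_weight = {}, 0.0, 0.0
    for line in sorted(candidates, key=lambda c: c["score"], reverse=True):
        if (used_volume + line["volume"] <= max_volume
                and used_weight + line["weight"] <= max_weight):
            order[line["sku"]] = order.get(line["sku"], 0) + 1
            used_volume += line["volume"]
            used_weight += line["weight"]
    return order

print(fill_container(candidates, MAX_VOLUME, MAX_WEIGHT))  # {'A': 2, 'B': 1}
```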

When Lokad tackles demand planning in the presence of container shipment constraints, the two primary questions we strive to address are:

  • What is the “best” composition of the next container to be ordered (which items, which quantities)?
  • What is the expected profitability of this next best container?

As long as we can address these two questions, ordering from suppliers becomes a piece of cake. All it takes is refreshing the forecasting “logic” in your Lokad account on a daily basis, and checking whether the next “best” container to be ordered reaches a certain profitability threshold; when it does, just order the suggested quantities. This process is even more flexible than filling the container up to its full capacity, as it makes it possible to handle circumstances where the most profitable containers are not filled up to 100%. In fact, it’s really up to the profitability analysis to decide whether each item is worth putting in the next container or not.

Computing precise estimations of both margins and costs requires a forecasting technology that is capable of considering a myriad of scenarios. At Lokad, we achieve this through probabilistic forecasts: we don’t forecast the average demand, but the probabilities associated with (almost) all future demand levels. Through our probabilistic forecasts, every scenario can be assessed financially and then weighted against its probability. Finally, every container’s potential composition can be assessed through its weighted average of financial outcomes, the weights being the probabilities associated with the respective demand scenarios.
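
As a toy illustration of such a probability-weighted assessment (hypothetical numbers, a single item, not Lokad's actual implementation), the expected outcome of stocking q units can be computed by unrolling every demand scenario:

```python
# Toy probability-weighted assessment of a stocking decision for one item.
# demand_probs[d] = probability that the future demand equals d units.
demand_probs = {0: 0.10, 1: 0.25, 2: 0.30, 3: 0.20, 4: 0.10, 5: 0.05}
unit_margin = 12.0    # gross margin per unit sold (illustrative)
carrying_cost = 3.0   # cost per unit left unsold (illustrative)

def expected_outcome(q):
    """Weighted average of the financial outcomes over all demand scenarios."""
    return sum(p * (unit_margin * min(d, q) - carrying_cost * max(q - d, 0))
               for d, p in demand_probs.items())

for q in range(6):
    print(q, round(expected_outcome(q), 2))
```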

The method for handling container shipments that we have just briefly described might look quite computationally intensive. Well, it is. However, the time and expertise of supply chain practitioners are far too valuable to be “burned away” on countless hours of tweaking Excel sheets.

This leads us to our third most overlooked aspect relating to containers: manually composing containers is a very tedious process, and it comes at the expense of more fundamental supply chain improvements. Indeed, we often observe that small companies could order containers more frequently; however, figuring out the exact composition of a container is so time-consuming that, realistically, it cannot be done more than about once a month. In a similar vein, larger companies frequently dismiss opportunities to consolidate shipments from multiple suppliers shipping from the same port, not because consolidation is impractical, but simply because it cannot be supported by manual processes.

As a result, in practice, manual container composition "hits" companies in two different ways: first, because the composition of the container isn’t really optimized in the first place, and second, because it consumes most of the supply chain management resources, which would be better spent improving the supply chain as a whole.

Lokad’s technology makes it quite straightforward to compose optimized containers in a fully automated manner. Check out our more technical entries in case you would like to tackle the challenge yourself. In practice, our Lokad team is here to assist your company in getting it right, as containers might not be the only constraint that your company is facing: there might be minimum order quantities, warehouse storage capacity, etc.


Retail pricing strategies

Published by Noora Kekkonen.

Pricing strategies are an essential part of demand forecasting as prices directly influence demand. All too often companies settle for benchmarking prices, when they actually should benchmark pricing strategies. Therefore, we have extended our knowledge base with a new collection of articles about the most popular pricing methods used in retail.

Pricing concepts

At Lokad we believe in optimizing pricing strategies instead of raw prices. By ‘pricing strategies’, we are referring here to the method of computing optimized prices given the available data and the market conditions. In order to assess the quality of a pricing strategy one might refer to the price elasticity of demand, which is a popular method. However, price elasticity can be misleading as it is a limited indicator of demand.

Depending on the type of the market, retailers can choose to adopt short-term or long-term pricing strategies. A high price maximizes short-term profit, but will result in a loss of market share. A low price maximizes long-term profit because it allows a firm to gain market share. In both cases, prices are best re-evaluated frequently. Repricing software, such as Lokad, aids in this re-evaluation by automatically recomputing prices depending on market conditions.

Most popular pricing strategies

In order to influence buying behavior, retailers can choose from a vast range of pricing strategies. For instance, one may want to increase the willingness to pay by creating product bundles and using bundle pricing; or use the same prices as one’s competitors with competitive pricing; or set prices based on the production costs and the desired level of mark-up with cost-plus pricing.

The decoy pricing method can be applied when one wishes to steer the customer by introducing either a slightly cheaper but much lower-quality product, or on the contrary, a much more expensive product of only slightly higher quality.

One widely used method is odd pricing, which aims to maximize profit by making micro-adjustments to the pricing structure. For example, this could mean setting a price at $17.99 instead of $18. In addition to the price structure, a retailer may also want to optimize the style of the prices. For some types of markets, price skimming can be a good option. This method consists of applying a very high price at first for the “early adopters” and then gradually decreasing the price over time. The opposite approach is penetration pricing. This quite aggressive type of pricing means setting the price at a very low level in order to increase demand, and raising it later on.
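
For the more mechanical strategies above, the arithmetic is simple; the sketch below (illustrative numbers) combines cost-plus pricing with an odd-price rounding rule:

```python
import math

def cost_plus_price(unit_cost, markup):
    """Cost-plus pricing: production cost plus the desired mark-up."""
    return unit_cost * (1.0 + markup)

def odd_price(price):
    """Odd pricing: pull the price just below the next round number."""
    return math.ceil(price) - 0.01

raw = cost_plus_price(unit_cost=12.50, markup=0.40)  # -> 17.50
print(odd_price(raw))                                # -> 17.99
```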

Tags: pricing, insights

Price elasticity is a poor angle for looking at demand planning

Published by Joannes Vermorel.

Lokad regularly gets asked to leverage an approach based on the price elasticity of demand for demand planning purposes; most notably to handle promotions. Unfortunately, statistical forecasting is counter-intuitive, and while leveraging demand elasticity might feel like a “good” approach, our extensive experience with promotions indicates that this approach is misguided and nearly always does more harm than good. Let’s briefly review what goes wrong with price elasticity.

A local indicator

Price elasticity is fundamentally a local indicator - in a mathematical sense. While it is possible to compute the local coefficient of the price elasticity of demand, there is no guarantee that this local coefficient bears any similarity to the coefficients that would be computed at alternative price points.

For example, it might make sense for McDonald’s to assess the elasticity coefficient for, say, the Big Mac moving from $3.99 to $3.89, because it’s a small price move - about 2.5% in amplitude - and the new price remains very close to the old one. And given McDonald’s scale of activity, it’s not unreasonable to assume that the demand function is relatively smooth with respect to the price.
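
To make the notion of a "local" coefficient concrete, price elasticity is the ratio of the relative change in demand to the relative change in price; a small sketch, with hypothetical demand figures for the price move above:

```python
def price_elasticity(q0, q1, p0, p1):
    """Local price elasticity: relative change in demand / relative change in price."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Hypothetical demand response to the $3.99 -> $3.89 move.
eps = price_elasticity(q0=1000, q1=1040, p0=3.99, p1=3.89)
print(round(eps, 2))  # about -1.6, valid only in the vicinity of $3.99
```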

At the other end of the spectrum, promotions, especially in the FMCG (fast moving consumer goods) and general merchandise sectors, are completely unlike the McDonald’s case described above. A promotion typically shifts the price by more than 20%: an entirely non-local move, yielding very erratic demand responses, completely unlike the smooth macro-effects that may be observed for McDonald's and its Big Mac.

Thresholds all over the place

The price elasticity insight is fundamentally geared towards smooth, differentiable demand functions. Oh yes, it is theoretically possible to approximate even a very rugged function with a differentiable one, but in practice, the numerical performance of this viewpoint is very poor. Indeed, markets are full of threshold effects: if customers are very price-sensitive, then being able to offer them a price just a little bit lower than any competitor can alter the market share rather dramatically. In such markets, it’s unreasonable to assume that demand will respond smoothly to price changes. On the contrary, demand responses should be expected to be swift and erratic.

Hidden co-variables

Last but not least, one fundamental issue with using price elasticity for demand planning in the context of promotions is that it puts too much emphasis on the pricing aspect of demand. There are other variables, the so-called co-variables, that have a deep influence on the overall level of demand. These co-variables too often remain hidden, even though identifying them is very much feasible.

Indeed, a promotion is first and foremost a negotiation that takes place between a supplier and a distributor. The expected increase in demand certainly depends on the price, but our observations indicate that changes in demand primarily depend on the way a given promotion is executed by the distributor. Indeed, the commitment to extra volume, a strong promotional message, additional or better-located shelf space and the potential temporary de-emphasis of competing products typically impact demand in ways that dwarf the pricing impact when it's examined on its own.

Reducing the promotional uplift to a matter of price elasticity is frequently a misguided numerical approach standing in the way of better demand planning. A deep understanding of the structure of promotions is more important than the prices.

Tags: promotion, forecasting

Streetlight effect and forecasting

Published by Joannes Vermorel.

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is." David H. Freedman (2010). Wrong: Why Experts Keep Failing Us.

One of the most paradoxical things about “classic” forecasts is that they target the average – sometimes the median – value of the future demand, while this average case, as we will see below, is mostly irrelevant. Whenever daily, weekly or monthly forecasts are being used, these can be considered average forecasts. Why? Because other kinds of forecasts, such as quantile forecasts, are not additive, which makes them fairly counter-intuitive. In fact, most supply chain practitioners aren’t even aware that alternatives to "classic" forecasts exist in the first place.
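
The non-additivity of quantile forecasts is easy to verify numerically; a toy Monte Carlo sketch, assuming two independent Poisson demand streams:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.poisson(3.0, 100_000)  # demand of product A
b = rng.poisson(5.0, 100_000)  # demand of product B

print(np.quantile(a, 0.95) + np.quantile(b, 0.95))  # sum of the 95% quantiles
print(np.quantile(a + b, 0.95))                     # 95% quantile of the sum
```

With these parameters, the sum of the two 95% quantiles lands around 15 units while the 95% quantile of the total demand lands around 13: summing per-product quantiles overshoots, which is precisely why quantile forecasts cannot be added up the way classic forecasts can.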

However, business-wise, as far as inventory is concerned, it’s not the middle ground that costs money, rather it’s the extremes. On the one hand, there is the unexpectedly high demand that causes a stock-out. On the other hand, there is the unexpectedly low demand that causes dead inventory. When the demand level is roughly where it was expected to be, inventory levels gently fluctuate, and inventory rotates very satisfyingly.

As a result, there is no point in optimizing the average case, i.e. when inventory is rotating very satisfyingly, because there is little or nothing to improve in the first place. It’s the extremes that need to be taken care of. Actually, most practitioners are keenly aware of this issue, as their top 2 problems are to improve the service quality on the one hand (i.e. mitigating the unexpectedly high demand), while keeping the stock levels in check on the other hand (i.e. mitigating the unexpectedly low demand).

Yet, since we have agreed that supply chain challenges are mainly concerned with the "extremes", why do many companies still look for answers through “average” forecasts? I believe that supply chain management, as an industry, is suffering from a bad case of drunkard’s search, a problem called the streetlight effect. Classical tools and processes shed light on “average” situations which barely need any further illumination, while leaving entirely in the dark whatever lies at the extremes.

A frequent misconception consists of thinking that improving the “middle” case should also marginally improve the extremes. Alas, statistical forecasting is counter-intuitive, and basic numerical analysis shows that this is simply not the case. Statistical forecasting is like a microscope: while incredibly sharp, its focus is also incredibly narrow.

Trying to fix your supply chain problems through classic “average” forecasts is like trying to diagnose why your car refuses to start by putting every single car part under a microscope, starting with the engine. At this rate, you will probably never manage to diagnose that your car won’t move because it's out of gas, which, in hindsight, was a pretty obvious problem.

However, this is not the end of the insanity. Now imagine that the repair guy, after failing to diagnose why your car isn’t moving, started to claim that his diagnosis had failed because his microscope didn’t have enough resolution. And now the repair guy is asking you for more money so that he can buy a better microscope.

Well, a similar scenario is presently happening in many companies: the previous forecasting initiative has failed to deliver the desired inventory performance, and companies double down with another forecasting initiative along the very lines that caused the first initiative to fail in the first place.

At Lokad, it took us 5 years to realize that the classic forecasting approach wasn’t working, and even worse, that it would never work no matter how much technology we added to the case, just like switching to a $27M ultra-high resolution microscope would never have helped the repair guy diagnose your empty tank. In 2012, we uncovered quantile forecasts, which we have steadily kept improving; and suddenly, things started working.

Those five years of steady ongoing failures felt long, very long. In our defense, when an entire industry works on false promises which can be traced back to university textbooks, it’s not that easy to start thinking outside the box when the box itself is so huge that you can spend your life wandering inside it in circles and never hitting the walls.

Tags: insights, forecasting

Magento in beta at Lokad

Published by Joannes Vermorel.

Just a few days ago we announced Lokad's integration with Shopify. Today, it's the turn of another vastly popular content management system for e-commerce: the native integration of Magento is now live in beta at Lokad.

This integration relies on Magento's REST API, which has been available since version 1.7, released back in April 2012. The authentication relies on OAuth. The set-up requires a bit of configuration in the Magento admin panel to grant access to a third-party app like Lokad. However, thanks to this set-up, you get very fine-grained control over which data Lokad can read or write (hint: we only need read-only access).
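
For the technically inclined, reading data through the Magento 1.x REST API boils down to OAuth-signed HTTP requests; a minimal sketch, assuming a hypothetical store URL and OAuth credentials already provisioned in the admin panel:

```python
# Minimal sketch: reading products from a Magento 1.x REST API.
# The store URL and the OAuth credentials below are hypothetical; the
# real ones are provisioned in the Magento admin panel when granting
# access to a third-party app.
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    client_key="consumer_key",
    client_secret="consumer_secret",
    resource_owner_key="access_token",
    resource_owner_secret="access_token_secret",
)

resp = session.get(
    "https://example-store.com/api/rest/products",
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
print(resp.json())  # read-only access is all an app like Lokad needs
```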

This integration is still in beta as we haven't yet properly tested our integration with the many versions of Magento that have been released during the last 3 years. Don't hesitate to give it a try though, and Lokad is here to help you get started in case you have any technical difficulties.

Tags: release, magento

Shopify integrated by Lokad

Published by Joannes Vermorel.

The retail platform Shopify is our latest integration. Now, Shopify-powered merchants can get advanced inventory forecasts and powerful commerce analytics in just a few clicks. Check out the Lokad app in the Shopify app store.

Through the Shopify API, Lokad retrieves all the product and sales data that contribute to your inventory optimization and your pricing optimization. Don't let the competition outservice your business.

As usual, the Lokad team is here to help. This integration is still very recent, and glitches may happen. Don't hesitate to contact us if you face any issue while plugging your Shopify store into Lokad.

Tags: shopify, release

Forecasting the series of future orders to suppliers

Published by Joannes Vermorel.

Collaborative supply chain management makes a lot of sense. In today’s day and age of ubiquitous internet connection, why should your suppliers be kept in the dark concerning your upcoming purchase orders? After all, if your company is capable of producing accurate forecasts about your upcoming orders, sharing these forecasts with your suppliers would certainly be of great help to them, which, in turn, would yield better service and/or better prices.

Yes, but all of this relies on one flawed assumption: that order forecasts can be made accurate. Unfortunately, they won’t be. Period. So whatever follows is merely wishful thinking.

Companies frequently get back to us asking if Lokad could forecast the sequence of upcoming purchase orders. After all, we should have everything it takes:

  • daily/weekly future sales levels (forecasted)
  • current stock levels, both on hand and on order
  • purchase constraints

By combining the different elements mentioned above, we could certainly roll out a simulation, and consequently forecast the upcoming purchase orders over a given period specified by a client. However, although this is possible to do, the results of such an operation would be disastrous. In this short post, we share our insights on this issue to help companies avoid wasting time on such forecasting attempts.

Statistics are terribly counter-intuitive. As mentioned in our previous posts, “intuitive” approaches are most certainly wrong; and the “correct” approaches are unsettling at best.

The central problem with forecasting orders to suppliers is that the calculations involved rely on an iterated sum of forecasts, which is very wrong on multiple levels. In particular, forecasting the next purchase order involves not one but two variables: the date of the order and the quantity ordered. Depending on the supply chain constraints, the quantity ordered might be relatively straightforward to forecast: if there is a minimum order quantity (MOQ), the order is likely to equal the MOQ threshold itself. On the other hand, if the item is expensive and rarely sold, the next quantity to be ordered is likely to be a single unit.

The true challenge lies in forecasting the date of the next purchase order, and even more so, the date of the purchase order that follows. Indeed, not only is the date of the next purchase order likely to carry a 20% to 30% error (like pretty much any demand forecast), but the date of the order that follows will have (roughly) twice that error, and the one after that (roughly) three times the error, etc.
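
A toy Monte Carlo run illustrates the compounding: if every inter-order interval carries some error, the date of the Nth order is the sum of N noisy intervals, and its spread keeps widening with N (illustrative numbers, independent errors assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_orders, n_sims = 5, 100_000
# Each inter-order interval: 30 days on average, plus noise (illustrative).
intervals = rng.normal(loc=30.0, scale=8.0, size=(n_sims, n_orders))
order_dates = np.cumsum(intervals, axis=1)  # dates of the 1st, 2nd, ... orders

for n in range(n_orders):
    print(f"order #{n + 1}: std of the date = {order_dates[:, n].std():.1f} days")
```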

In practice, the uncertainty regarding the date of the Nth upcoming purchase order grows so fast that it becomes a worthless piece of information for the supplier. The supplier will be much better off doing her own forecasts based on her own demand history, even if this forecast can’t leverage the most recent demand signal as observed downstream.

However, while forecasting purchase orders and sharing them with suppliers doesn’t work, moving towards more collaborative supply chain management remains a valid business goal; it just happens that this type of forecast is not the right way to pursue this objective.

Stay tuned, we will make sure to discuss here in due course how collaborative supply chain management can be correctly executed from a predictive perspective.

Tags: insights, forecasting

NetSuite integrated by Lokad

Published by Joannes Vermorel.

NetSuite was one of the first ERP systems operating fully in SaaS mode. Over the years, the NetSuite solution has steadily expanded, and NetSuite now features an extensive business suite which includes financials, CRM and more.

Today, we are proud to announce that NetSuite is now natively supported by Lokad. Thanks to the SuiteTalk integration (web service), Lokad can import the entirety of the NetSuite data and deliver advanced inventory forecasts and/or pricing optimization.

The NetSuite integration is already live. We import inventory items, sales orders and purchase orders. All you need to get started is a Lokad account, and you can get one for free in less than 1 minute.

The Lokad team is here to take extra special care of our early NetSuite-powered clients to make sure everything goes very smoothly.

Tags: release, netsuite

Supply Chain Antipatterns, a Lokad initiative

Published by Joannes Vermorel.

Most supply chain initiatives fail. Yet, over the years at Lokad, we have come to realize that those failures are far from random: there are recurrent patterns that invariably lead to failure. Thus, we decided to start a side initiative to survey the most frequent misguided responses to supply chain challenges through our new blog, Supply Chain Antipatterns.

The blog comes with real comic strips produced in-house by the Lokad team.

This initiative is intended as a collaborative effort. Don't hesitate to tell the tale of your own misguided attempts (you may remain anonymous too). It might help more than a few companies avoid falling into the same trap in the future.

Tags: antipatterns, insights, supply chain

Beyond software: inventory optimization as a service

Published by Joannes Vermorel.

SaaS is the promise of no software. However, as far as inventory optimization is concerned, no matter how good the service, software alone can’t address the full challenge. Delivering inventory performance takes significant effort:

  • Quantitative performance metrics must be carefully aligned with business goals; otherwise the “system” is just going to let your business accelerate in the wrong direction.
  • Historical data must be thoroughly qualified; otherwise, the “system” will fall for the all too common garbage in, garbage out problem.
  • The statistical tools must be handled with care; in particular, statistics can be terribly counter-intuitive, and the incorrect usage of the tools won’t yield the expected results.

As we noticed our clients frequently struggling with those challenges, we started, more than a year ago, offering end-to-end inventory optimization services, upgrading clients from their original software-only subscription plans to plans with very hands-on support.

What started as a favor for our largest clients turned out to work exceedingly well. The Lokad team managed to deliver very tangible ROI even for very tough challenges like the ones we face with aerospace supply chains. Moreover, we learned to deliver ROI measured against the very specific KPIs established by the client herself, which is not only the right thing to do, but also a great way to establish trust.

Today, we are starting to offer Premier subscription plans: end-to-end inventory optimization services. With a Premier plan, the Lokad team gets hands-on, and the motto is simply: do whatever it takes to deliver inventory performance. Naturally, having access to both state-of-the-art forecasts and powerful tools for supply chain analytics is a good starting point for the supply chain specialists working at Lokad.

Do you feel that your company is holding too much inventory while serving too few clients? Does the prospect of hiring a team of data scientists and supply chain specialists look daunting? Just contact us, we can help.

Tags: business, release

Data qualification is critical

Published by Joannes Vermorel.

Wikipedia lists seven steps for a data analysis process: data requirements, data collection, data processing, data cleaning, exploratory data analysis, data modeling, and finally the generation of production results. When Lokad forecasts inventory, optimizes prices, or tackles any other kind of commerce optimization, our process is very similar to the one described above. However, there is one more vital step that typically accounts for more than half of the effort applied by Lokad’s team, and it is not even part of the list above. This step is data qualification.

Now that “Big Data” has become a buzzword, myriads of companies are trying to do more with their data. Data qualification is probably the second largest cause of project failures, right after unclear or unwise business goals - which happens anytime an initiative starts from the “solution” rather than starting from the “problem”. Let’s shed some light on this mysterious “data qualification” step.

Data as a by-product of business apps

The vast majority of business software is designed to help operate companies: the Point-Of-Sale system is there to allow clients to pay; the Warehouse Management System is there to pick and store products; the Web Conferencing software lets people carry out their meetings online, etc. Such software might produce data too, but data is only a secondary by-product of the primary purpose of this software.

The systems mentioned are designed to operate the business, and as a result, whenever a practitioner has to choose between better operations and better data, better operations will always be favored. For example, if a barcode fails when being scanned at the point of sale of your local hypermarket, the cashier will invariably pick a product that happens to have the same price and scan it twice; sometimes they even have a cheat sheet of barcodes all gathered on a piece of paper. The cashier is right: the No. 1 priority is to let the client pay, no matter what. Generating accurate stock records is not an immediate goal when compared to the urgent need of servicing a line of clients.

One might argue that the barcode scanning issue is actually a data cleaning issue. However, the situation is quite subtle: the records remain accurate to some extent, since the amount charged to the client remains correct, and so does the count of items in the basket. Naively filtering out all the suspicious records would do more harm than good for most analyses.

Yet, we observe that too often, companies – and their software vendors too – enthusiastically ignore this fundamental pattern for nearly all the business data they generate, jumping straight from data processing to data cleaning.

Data qualification relates to the semantics of the data

The goal of the data qualification step is to clarify and thoroughly document the semantics of the data. Most of the time, when (large) companies send tabular data files to Lokad, they also send us an Excel sheet where each column found in the files gets a short line of documentation, typically like: Price: the price of the product. However, such a brief documentation line leaves a myriad of questions open:

  • what is the currency applicable for the product?
  • is it a price with or without tax?
  • is there some other variable (like a discount) that impacts the actual price?
  • is it really the same price for the product across all channels?
  • is the price value supposed to make sense for products that are not yet sold?
  • are there edge-case situations like zeros to reflect missing values?

Dates are also excellent candidates for semantic ambiguities. When an orders table contains a date column, the date-time can refer to the time of:

  • the basket validation
  • the payment entry
  • the payment clearance
  • the creation of the order in the accounting package
  • the dispatch
  • the delivery
  • the closing of the order

However, such a shortlist hardly covers the actual oddities encountered in real-life situations. Recently, for example, while working for one of the largest European online businesses, we realized that the dates associated with purchase orders did not have the same meaning depending on the originating country of the supplier factories. European suppliers were shipping by truck, and the date reflected the arrival at the warehouse; while Asian suppliers were shipping by, well, ship, and the date reflected the arrival at the port. This little twist typically accounted for more than 10 days of difference in the lead time calculation.

For business-related datasets, the semantics of the data are nearly always dependent on the underlying company processes and practices. Documentation relating to such processes, when it exists at all, typically focuses on what is of interest to the management or the auditors, but very rarely on the myriad of tiny elements that exist within the company IT landscape. Yet, the devil is in the details.

Data qualification is not data cleaning

Data cleaning (or cleansing) makes most sense in experimental sciences, where certain data points (outliers) need to be removed because they would incorrectly “bend” the experiments. For example, some measurements in an optics experiment might simply reflect a defect in the optical sensor rather than anything actually relevant to the study.

However, this process does not reflect what is typically needed when analyzing business data. Outliers might be encountered when dealing with the leftovers of a botched database recovery, but mostly, outliers are marginal. The (business-wise) integrity of the vast majority of databases currently in production is excellent. Erroneous entries exist, but most modern systems do a good job of preventing the most frequent ones, and are quite supportive when it comes to fixing them afterwards as well. Data qualification is very different in the sense that the goal is neither to remove nor to correct data points, but rather to shed light on the data as a whole, so that the subsequent analysis truly makes sense. The only thing that gets “altered” by the data qualification process is the original data documentation.

Data qualification is the bulk of the effort

While working on dozens of data-driven projects related to commerce, aerospace, hospitality, bioinformatics and energy, we have observed that data qualification has always been the most demanding step of the project. Machine learning algorithms might appear sophisticated, but as long as the initiative remains within the well-known boundaries of regression or classification problems, success in machine learning is mostly a matter of prior domain knowledge. The same goes for Big Data processing.

Data qualification problems are insidious because you don’t know what you’re missing: there is a semantic gap between the “true” semantics, as they should be understood in terms of the data produced by the systems in place, and the “actual” semantics, as perceived by the people carrying out the data analysis. What you don’t know can hurt you. Sometimes, the semantic gap completely invalidates the entire analysis.

We observe that most IT practitioners vastly under-estimate the depth of the peculiarities that come with most real-life business datasets. Most businesses don’t even have a full line of documentation per table field. Yet, we typically find that even with half a page of documentation per field, the documentation is still far from thorough.

One of the (many) challenges faced by Lokad is that it is difficult to charge for something that is not even perceived as a need in the first place. Thus, we frequently carry out data qualification work under the guise of more noble tasks like “statistical algorithm tuning” or similar scientific-sounding work.

The reality of the work, however, is that data qualification is not only intensive from a manpower perspective, it’s also a truly challenging task in itself. It’s a mix between understanding the business, understanding how processes spread over many systems - some of them invariably of the legacy kind - and bridging the gap between the data as it exists and the expectations of the machine learning pipeline.

Most companies vastly underinvest in data qualification. In addition to being an underestimated challenge, investing talent in data qualification does not result in a flashy demo or even actual numbers. As a result, companies rush to the later stages of the data analysis process only to find themselves swimming in molasses because nothing really works as expected. There is no quick fix for an actual understanding of the data.

Tags: insights, bigdata

Hiring our Chief Marketing Officer!

Published by Joannes Vermorel.

We are hiring a lead generation wizard!

Lokad is a software company that specializes in quantitative optimization for commerce. We help merchants, and a few other verticals, to forecast their inventory and to optimize their prices. We are profitable; we are still small but growing fast. We are closing deals in North America, Europe and Asia. The vast majority of our clients are not in France.

Lokad is sold through the web, almost exclusively relying on inbound marketing. We have hundreds of leads per month, but we are aiming for thousands. So far, marketing has been done part-time by the founder, but it's time to put marketing in more capable hands.

As the Chief Marketing Officer at Lokad, you will have one metric: the number of qualified leads; and we expect you to own a lead commit as well. At this stage, we do not care about corporate marketing, only lead generation matters. The web is the native marketing channel of Lokad. While other channels can be leveraged, we expect you to steadily increase the presence of Lokad on the web to generate the bulk of lead growth.

Our technology is very noticeable, and we need you to make sure that decision makers do notice. Our reach is the world. Lokad is already available in many languages beyond English.

We are located 50m from Place d'Italie, Paris, France.

Desired Skills and Experience

You have two years or more of experience in lead generation marketing for a B2B SaaS company. With a bit of help from a graphic designer, you can deliver awesome web marketing materials. Your written communication skills are top notch, and it's a big bonus if you have a blog with some audience. B2B stuff is usually boring and non-viral, and yet, you can make things happen: you can vanquish the market inertia and make people pay attention. Naturally, you are perfectly fluent in English. Speaking French is a bonus but not a requirement.

Tags: hiring

SkuVault natively integrated

Published by Joannes Vermorel.

SkuVault is a warehouse management software tailored for eCommerce. We are pleased to announce that SkuVault is now natively supported by Lokad. Importing the SkuVault historical data into Lokad can now be done with a single click - or no clicks at all using our scheduling feature. Now SkuVault-powered businesses can get advanced inventory forecasts as well as powerful commerce analytics within minutes.

Tags: partners

Competitive intelligence with Competera

Published by Joannes Vermorel.

Starting from today, Lokad-powered merchants can obtain the prices of their competitors by using their own Lokad account, thanks to our new partner Competera. The app benefits from a native integration within Lokad.

Competera is a competitive price monitoring app. Give them the domain names of your competitors, the domain name of your own store, and Competera will begin extracting prices right from the web. Competera takes care of generating a price matrix where each one of your products gets matched with the prices of your competitors. Competera works pretty much like Lokad: no software to install and a monthly subscription. You can get a trial and demo as well.

By combining Competera and Lokad, it becomes possible:

  • to stop wasting time with manual and infrequent competitor surveys
  • to monitor how your market share reacts to competitors' pricing moves
  • to craft pricing strategies that leverage both in-house data and competitors' data
  • to fine-tune the trade-off between profitability and growth

The Competera team is here to deliver all the support your company needs as far as monitoring your competition is concerned. In turn, Lokad is here to turn this data into better margins, better stocks and more growth depending on your strategic targets.

Interested? Just drop us an email, and we will make sure your setup goes smoothly.

Tags: pricing, partners

Currency exchange rates with Envision

Published by Joannes Vermorel.

Merchants frequently buy in one currency and sell in another. As online commerce is becoming more and more global, it's not unusual to encounter merchants who are buying in multiple currencies, and selling in multiple currencies as well. From a business analytics viewpoint, it soon becomes rather complicated to figure out where the margins stand exactly. In particular, margins depend not only on the present currency conversion rates, but also on those that were in place 6 months ago.

As part of our commerce analytics technology, we have recently introduced a new forex() function that is precisely aimed at taking into account historical currency conversion rates for almost 30 currencies - including all the major ones.

Lokad's built-in dashboards have already been updated to take advantage of this function. Now, when Lokad carries out a gross-margin analysis, for example, all the sales orders and purchase orders are converted into a single currency, applying the historical rates in effect at the time the transactions were made.
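
Conceptually, the calculation pairs each transaction with the exchange rate of its own date rather than today's rate; a minimal Python illustration of the idea (hypothetical rates; the actual forex() signature is documented separately):

```python
# Toy illustration: converting historical transactions into a single
# currency using the rate in effect on each transaction date.
rates_eur_usd = {             # hypothetical historical EUR/USD rates
    "2015-01-05": 1.19,
    "2015-02-02": 1.13,
}
transactions = [              # (date, amount in USD)
    ("2015-01-05", 250.0),
    ("2015-02-02", 250.0),
]

total_eur = sum(amount / rates_eur_usd[date] for date, amount in transactions)
print(round(total_eur, 2))    # identical USD amounts, different EUR values
```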

Tags: envision

In the end, there can be only one

Published by Joannes Vermorel.

When it comes to the optimization of stock levels, or prices, or assortments… merchants need to look at many business performance indicators to be able to make the correct operational decisions. However, numerical optimization, much like statistical forecasting, is deeply counter-intuitive. In particular, there is a deep and subtle catch when using indicators to optimize an aspect of your business: in the end, there can be only one. Maintaining multiple indicators to drive the final decision that results from an optimization process is a recipe for picking a posteriori the metric that makes the management look good, while damaging the business in the process. Let’s review how the whole thing unfolds.

There are many indicators that are typically found in commerce. For example, we have the total stock value (the lower, the better), the average inventory service level (the higher, the better), the total sales volume (the higher, the better), the average gross margin (the higher, the better), etc. When looking at just one indicator in isolation, everything is simple: there is an obvious “improvement” direction (e.g. the higher, the better). However, as soon as we consider multiple indicators at the same time, things get more complicated – a lot more complicated.

Indeed, all these indicators conflict: lowering the stock value negatively impacts the service levels, increasing the gross margin (nearly always) has a negative impact on the sales volume… Thus, the whole idea of improving one indicator at a time is bunk: this one improvement nearly always comes at the expense of a deterioration elsewhere. Then, for larger companies, the problem is amplified by the corporate structure itself: the supply chain division is held accountable for any increase in stock, but it’s the contact center division that is rewarded for the improvements in customer satisfaction.

However, the problem does not stop at simply managing conflicting indicators; time is also of the essence, since market conditions are changing all the time and there is a lot of noise involved. As a result, whatever the management might be doing, there are (nearly) always some indicators that will improve from one quarter to the next. Thus, in order to avoid looking bad, it is extremely tempting to cherry-pick the indicators deemed most relevant. At the risk of sounding very technical, it’s a case of ex-post-facto rationalization: we (un)consciously tend to build a good narrative after stuff happens to explain why everything went according to plan.

Therefore, whenever a business optimization initiative is at stake, there can be only “one” indicator that consolidates all the relevant business drivers. For example, as far as inventory optimization is concerned, the pinball loss function is a first step towards building an indicator that properly reflects the asymmetry existing between over-forecasting and under-forecasting the future demand. While the pinball loss is far from telling everything about your inventory situation, it can already give sensible results as far as the trade-off between “stock value” and “service levels” is concerned. Having this “master” indicator is the only way of optimizing just about anything because, as we have seen, when you get the luxury of hand-picking conflicting indicators, everything becomes blurred.
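
For reference, the pinball loss penalizes over- and under-forecasting asymmetrically around a target quantile tau (e.g. tau = 0.95 for a 95% service level); a minimal sketch:

```python
def pinball_loss(actual, forecast, tau):
    """Pinball loss: asymmetric penalty for a quantile forecast at level tau."""
    if actual >= forecast:
        return tau * (actual - forecast)      # under-forecast (stock-out side)
    return (1.0 - tau) * (forecast - actual)  # over-forecast (dead-stock side)

# At tau = 0.95, under-forecasting demand by 10 units costs 19x more
# than over-forecasting it by the same amount.
print(pinball_loss(actual=110, forecast=100, tau=0.95))  # 9.5
print(pinball_loss(actual=90, forecast=100, tau=0.95))   # 0.5
```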

Nevertheless, it is important to clarify that while a “master” indicator is essential, there is no need to discard all the other indicators. Commerce typically tends to be complex, and in order to apprehend this complexity, it typically takes many indicators to gain all the necessary insights. However, these indicators should be used precisely for that: gaining insights, not driving operational decisions.

Coming up with one efficient master indicator is difficult. This indicator should properly balance all the different business drivers intertwined in the problem being addressed. In practice, it is frequently a composite indicator built from a combination of conflicting indicators with strategic “weighting” variables. These variables represent the best strategic understanding that management can produce about their business. Indeed, there is no “quantitative” answer to highly ambiguous questions like: do we want more growth or more margin?
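
In practice, such a composite indicator can be as simple as a weighted sum, with the weights carrying the strategic trade-offs; an illustrative sketch (both the indicators and the weights are hypothetical):

```python
# Illustrative composite "master" indicator: conflicting indicators merged
# through strategic weights chosen by the management.
def master_indicator(gross_margin, sales_volume, stock_value,
                     w_margin=1.0, w_growth=0.5, w_stock=0.2):
    # Higher margin and volume are good; capital locked in stock is bad.
    return (w_margin * gross_margin
            + w_growth * sales_volume
            - w_stock * stock_value)

# "Do we want more growth or more margin?" is answered by the weights,
# not by the formula itself.
print(master_indicator(gross_margin=120_000, sales_volume=80_000,
                       stock_value=300_000))  # 100000.0
```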

A common pitfall that we frequently observe when designing master indicators is “naïve rationalism”. This refers to indicators that, while being perfectly formalized, do not capture one or more of the essential drivers of a business. As a result, improving such indicators is like accelerating while driving in the wrong direction. Naïve rationalism is dangerous because it gives a false sense of confidence to the people involved. As the saying goes, it’s better to be roughly right than precisely wrong.

Tags: insights

Document your folders with MarkDown

Published by Joannes Vermorel.

Whenever Lokad produces a dashboard or a forecasting report, under the hood, the input data is stored as tabular files within your Lokad account. Those files are accessible through the Files tab of the top navbar.

[Screenshot: file listing]

However, when your account grows large, with many files and many folders, it can also become a bit messy. Keeping the data well organized and well documented is a critical part of a good data-driven commerce optimization initiative. Thus, we have been working on new features to make this easier.

All the folders of your Lokad account can now be documented with notes written in the CommonMark flavor of MarkDown. When a file named ReadMe.md is found in a folder, then its content gets displayed just above the list of files, as illustrated above.

[Screenshot: MarkDown editor]

Then, if you click the ReadMe.md file, you get a MarkDown editor where you can see your notes and their rendered counterpart side-by-side. If your folder doesn't have a ReadMe.md yet, you can create one just by clicking the Add ReadMe.md button that appears below the file list.
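
For instance, a folder's ReadMe.md could look like the following (illustrative content, rendered as CommonMark):

```markdown
# Sales history (ERP extract)

Refreshed daily at 03:00 UTC by the scheduled import.

- `orders.tsv`: one line per order line, net of returns
- `items.tsv`: product catalog, prices **without** tax

> Contact the supply chain team before editing these files.
```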

Tags: bigfiles, markdown, technical, release

Forecasting 3.0 with Quantile Grids

Published by Joannes Vermorel.

Delivering better forecasts has always been the core focus for Lokad. Today, we are unveiling the third generation of our forecasting technology, based on quantile grids. In layman’s terms, quantile grids deliver an unprecedented level of performance, which means that your company can service more clients, more reliably, and with less inventory. Unlike the existing forecasting methods available on the market, quantile grids do not provide one demand forecast per product, but the entire probability distribution for (nearly) all possible futures. Quantile grids are made possible through the combination of Machine Learning, Big Data, Cloud Computing and some commerce-driven insights.

Quantile Grids are now available in production for all our clients, accessible through a new Quantile Grid option for any inventory forecasting project.

Forecasting 1.0: classic forecasts

When Lokad was founded back in 2008, we started with what we now call classic forecasts, our version 1.0: a forecasting methodology where each product or SKU is associated with a periodic value, for example weekly forecasts up to 13 weeks ahead. Implicitly, these forecasts are median forecasts: unbiased forecasts are expected to have a 50% chance of being above or below the future demand. For the rest of the market, these are not referred to as classic forecasts; they are the only forecasts, because most of our competitors never even considered any alternatives.

However, as far as commerce is concerned, no matter how accurate the classic forecasts, they work poorly in practice. Intuitively, classic forecasts are simply not looking at what really matters. The average or median demand is the easy and uninteresting case where everything goes according to plan. The tough cases are concerned with unexpectedly high or unexpectedly low demand, because they respectively create stock-outs and dead inventory. These types of extreme situations are the ones that really cost money. Classic forecasts work poorly, not because the algorithms are not good, but because they do not look at the business from the correct angle. Thus, no matter how much R&D investment a company puts into classic forecasts, it just fails. This was one of the toughest lessons for Lokad to learn in our early days.

Forecasting 2.0: quantile forecasts

In 2012, we made our first breakthrough with quantile forecasts. Despite a name that might sound downright scary, quantile forecasts are much closer to what executives actually do for their companies: they are scenario-based forecasts. Instead of looking at the average case, quantile forecasts pose questions like the following: if we look at the top 5% of our most optimistic demand prospects, will we suffer from a stock-out? And if we look at the worst 5% of our most pessimistic demand prospects, will we have to deal with dead inventory? Quantile forecasts directly tackle the tough questions that actually matter from a business perspective. As engineers say, it’s better to be approximately correct than exactly wrong; and while quantile forecasts also suffer from all the inaccuracies associated with classic forecasts, they massively outperform classic forecasts from an operational perspective whenever inventory is involved.

Yet, quantile forecasts are not the pinnacle of forecasting either. On the surface, our quantile forecasting technology suffered from numerical oddities such as quantile crossing and quantile instabilities. Since those oddities are quite visible, they can be efficiently mitigated. On a deeper level, however, we realized that our quantile forecasts were still not perfectly aligned with the actual business tough spots. In particular, quantile forecasts leave the burden of optimizing the service levels to the Supply Chain Manager. This is cheating - in a way - because a considerable part of inventory performance is actually delivered through a very precise tuning of the most profitable service levels, adequately balancing inventory costs and quality of service.

Forecasting 3.0: quantile grids

In February 2015, we are releasing our second forecasting breakthrough: quantile grids. Over the years, we came to terms with the fact that forecasts can be nothing but imperfect. Accurate forecasts are a fairytale, conveniently repeated within a market overrun by underwhelming vendors. Since we cannot predict the exact future, what about trying to assign a probability to every single possible future? That is, the probability of selling zero units, one unit, two units, etc. This is exactly what quantile grids are about: delivering not just one forecast per product, but the entire probability distribution of demand for every product. Under the hood, quantile grids are a little bit like quantile forecasts, except that the demand forecast is simultaneously computed at all service levels.

Optimizing inventory or managing a supply chain is all about balancing risks and opportunities: inventory levels vs service levels, purchase price vs supplier lead time, bulk purchase vs made to order, and so on. While quantile forecasts can pinpoint one or two troublesome scenarios, in the end, it is just one forecast value per product, and no matter how good this value can be, this one value cannot capture all the diversity of possible business outcomes. In contrast, quantile grids tackle the problem head-on: all outcomes are computed and associated with their respective probabilities. For every scenario - say, a future demand of 3 units when only 2 units have been bought - it becomes straightforward to compute the net business result: 2 units sold, and 1 unit missed. As a result, every purchase decision can be assessed by simply unrolling all the scenarios and applying the calculated probability to each scenario.

A breakthrough coming from aerospace

While Lokad primarily services retailers, we serve other industries as well, such as aerospace. One year ago, we started working for a large joint venture between AirFrance Industries and Lufthansa Technik, and realized that our quantile forecasting technology was not entirely up to the challenge. Each quantile forecast is like a single business scenario. While it is possible to combine 3, 4 or 5 different business scenarios, it takes great effort to implement the rules that glue all of these scenarios together in order to produce optimized supply decisions.

A much more elegant solution, and one which also yields much better inventory performance, consists of forecasting and evaluating all future business scenarios. No more ad-hoc scenarios that we desperately try to weld together, but a listing of (nearly) all possible scenarios (granted, it is a long listing), all treated in a simple and uniform way. This approach comes with the downside of being brutally more demanding as far as computing resources are concerned. However, thanks to our favorite cloud computing platform - Microsoft Azure - computing resources have never been cheaper, and prices are still in free fall.

The results that we obtained through quantile grids for aerospace proved to dwarf the performance of our flagship quantile forecasting technology. It was time to bring this rocket science (well, not rockets, jet airliners actually) back to merchants, and the multiple experiments that we have performed over the last couple of months confirmed the decisive superiority of quantile grids over our original quantile forecasts.

Future of predictive commerce optimization

When we first released quantile forecasts three years ago, I predicted that within 10 years, quantile forecasts would be the default tool for any supply chain practitioner serious about her inventory performance. Well, it turned out that the efforts of the entire Lokad team, including my own, proved me wrong. As we have uncovered an approach superior to our initial quantile forecasts, we came to the conclusion that the long-term future of quantile forecasting is brittle. Yet, the future of the descendant of quantile forecasts is brighter than ever, as quantile grids solve the challenges that had been eluding us for years, such as the optimization of service levels, container shipments or multi-sourcing strategies.

Also, for years, inventory forecasting and pricing optimization have been treated in strict isolation, as if they were parts of two separate puzzles: the demand forecasting engine ignored what happens on the pricing side, and, to return the favor, the pricing engine did not care about the supply chain constraints either. However, stocks and prices are two sides of the same coin; and we now realize that any optimization attempt that blindly ignores the other side of the coin is a naive attempt at best.

Thus, while I will avoid making the same mistake and predicting that quantile grids are the long-term future of forecasting only to be proved wrong by Lokad’s team later on, I will now more safely bet that whatever predictive technology emerges from our efforts, pricing analysis will probably become unified with stock analysis along the way. We are not quite there yet, but we are making steady progress in this direction.

New methodology: purchasing prioritization

All inventory optimization systems (Lokad 2.0 included) compute reorder points. By comparing reorder points with quantities on hand and on order, these systems also compute suggested reorder quantities. Over the years, we have discovered two major limitations of this approach. First, those systems do not say anything about the target service levels, and their optimization. Second, reorder points prove to be somewhat inflexible whenever purchase constraints are involved.

Inventory optimization systems traditionally produce a static set of reorder points (one per SKU), primarily driven by their respective user-defined service levels. However, this is cheating, because the burden of figuring out the "optimal" service levels falls back on the supply chain planner; and not only does figuring out the correct service levels prove to be a very time-consuming exercise, it is also a source of major inefficiencies if the service levels are inadequately chosen.

With quantile grids, the picture is very different: a master purchase priority list is calculated. Technically, it is a list where each SKU appears on numerous lines, each line being associated with a suggested order quantity – typically 1 unit if no supply constraints are present. The list is prioritized, and this prioritization criterion is of primary importance.

For most businesses, this prioritization answers the question: for one extra dollar of inventory, which unit gives the company the highest returns? The criterion can be formulated as the expected gross margin minus the expected inventory carrying costs. Naturally, as we go down the list, the expected gross margin sharply decreases, because the probability of a demand high enough to absorb the stock becomes very thin. Similarly, going down the list, the inventory carrying cost sharply increases, as each extra unit of inventory is expected to remain in the warehouse for longer. In theory, the list has no end, as it goes on to infinity. In practice, however, we simply stop at a point well beyond what would constitute "reasonable" inventory levels. When a purchase is made, the goal is not to exhaust the list, but to buy items in order of priority and to stop buying once the spending target is reached.

Consequently, this entirely removes the need to specify service levels. Once a spending budget is defined, the company purchases its goods in the order established by the master purchase priority list. Purchasing in this order ensures that the company's revenues or profits are maximized according to the chosen prioritization criterion.
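
As an illustration, here is a minimal Python sketch of how such a master priority list could be assembled and consumed. The decaying score function and all figures are hypothetical stand-ins for the actual quantile-grid computation; the point is the mechanics: score each marginal unit, sort, then buy down the list until the budget is spent.

```python
# Hypothetical sketch: build a master purchase priority list where each
# line is one extra unit of a SKU, scored by expected gross margin minus
# expected carrying cost per dollar spent, then purchase down the list.

from dataclasses import dataclass

@dataclass
class Line:
    sku: str
    unit_index: int  # 1st extra unit, 2nd extra unit, ...
    score: float     # expected return of this marginal unit, per $1 spent
    cost: float      # purchase cost of one unit

def build_priority_list(skus, max_units=20):
    lines = []
    for sku, (cost, marginal_return) in skus.items():
        for k in range(1, max_units + 1):
            # marginal_return(k): expected margin minus carrying cost of
            # the k-th extra unit; it decays as extra demand gets unlikely.
            lines.append(Line(sku, k, marginal_return(k) / cost, cost))
    # Highest marginal return per dollar first.
    return sorted(lines, key=lambda line: line.score, reverse=True)

def purchase(priority_list, budget):
    basket, spent = {}, 0.0
    for line in priority_list:
        if spent + line.cost > budget:
            break  # stop buying once the spending target is reached
        basket[line.sku] = basket.get(line.sku, 0) + 1
        spent += line.cost
    return basket, spent

# Illustrative data: (unit cost, decaying marginal return of the k-th unit).
skus = {
    "A": (10.0, lambda k: 8.0 * 0.7 ** k),
    "B": (25.0, lambda k: 15.0 * 0.6 ** k),
}
print(purchase(build_priority_list(skus), budget=200.0))
```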

Quantile grids are also much more versatile in their capacity to address scenarios involving supply constraints. Quantile forecasts, powerful as they are, produce suggested quantities that stop matching the supply constraints as soon as minimal order quantities, either per SKU or per supplier, and possibly container volume capacity constraints, enter the picture. It is then up to the supply chain planner to handle all the adjustments, removing certain SKUs or increasing the units of others, in order to compose a complex order batch that fulfills all the constraints.

With quantile grids, we have a much more compelling and straightforward user experience to propose. The master list makes it simple to accommodate ordering constraints. If minimal order quantities per SKU are present, then the ineligible lines can simply be removed from the list. Similarly, if a target capacity constraint exists to accommodate container shipments, then purchase entries can be processed following the order of the list until the target capacity is reached.
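
The following minimal sketch shows how the same list accommodates both constraints; the list entries, the minimal order quantities and the capacity figure are all hypothetical. Lines below a SKU's minimal order quantity are skipped, and the walk down the list stops once the container capacity is reached.

```python
# Hypothetical sketch: process a prioritized purchase list under two
# common supply constraints, a minimal order quantity (MOQ) per SKU and
# a container volume capacity. Each line: (sku, quantity, volume per unit).

priority_list = [
    ("A", 5, 0.2),  # one line per suggested increment, best first
    ("B", 1, 1.0),
    ("A", 5, 0.2),
    ("C", 1, 0.5),
]
moq = {"A": 5, "B": 1, "C": 10}  # minimal order quantities per SKU
container_capacity = 3.0         # e.g. cubic meters, illustrative

order, used_capacity = {}, 0.0
for sku, quantity, unit_volume in priority_list:
    if quantity < moq[sku] and sku not in order:
        continue  # ineligible line: below the SKU's minimal order quantity
    volume = quantity * unit_volume
    if used_capacity + volume > container_capacity:
        break     # the target container capacity has been reached
    order[sku] = order.get(sku, 0) + quantity
    used_capacity += volume

print(order, used_capacity)  # {'A': 10, 'B': 1} 3.0
```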

What’s next?

While quantile grids are already live and accessible to all companies with an open Lokad account, we still lack documentation outlining both the technical aspects and the supply chain best practices relating to this new technology. This material is coming. Stay tuned.


Wanted: competitive price monitoring partner

Published on by Joannes Vermorel.

We are seeking an awesome competitive price monitoring app that would be natively integrated into Lokad. Our goal is to offer Lokad-powered merchants a 1-click solution for obtaining their competitors' prices.

If you happen to know such a solution that you would like to see implemented in Lokad, don’t hesitate to forward this post to a relevant contact. If you happen to work in such a company, well, read on.

A bit of context

Lokad specializes in helping merchants to improve their prices and their stocks, the two challenges being fairly entangled in practice. In order to do this, we leverage advanced predictive analytics, packaged in a way that makes them accessible even to very small companies.

What we don’t do, however, is retrieve competitors' prices by directly crawling the web. Thus, many of our clients send us their competitors' price data, which they obtain from other software companies. Since competitive intelligence is a basic need in commerce, Lokad ought to provide something better than Bring Your Own Data. However, it turns out that crawling the web is a very specific challenge: our existing technology would not help one bit.

As there are dozens of price monitoring solutions out there, let’s not reinvent the wheel. We would like to integrate one of these apps, the best we can find, into our existing solution so that competitive intelligence becomes a 1-click upgrade at Lokad.

Properties of the partnership

The partner gets a super-qualified sales channel: the Lokad user base. On our side, the other data challenges have already been taken care of, and the prices are ready to be consumed.

The price monitoring app would be an option available to all our users; and Lokad would actively promote this option not only on our website, but directly within our own app as well.

We don't expect this service to be free. Nor do we plan to interfere with what you, as a partner, charge our mutual clients. Finally, we are not asking for a revenue share.

The DNA of our awesome partner

We seek an awesome price monitoring app. Naturally, it would be in SaaS mode and subscription-based, but we feel it should also have the following properties:

  • Free trial: the technology should be sufficiently automated to make free trials possible. For us, the absence of a free trial is a clear indication of a consultingware solution, where the provider starts coding the day a client signs up for the service.
  • Focus on scale: just about any coder can hack a price scraping bot targeting a small ecommerce site in a week; however, this does not scale. Our “dream” user experience: a merchant asks for “contoso.com” and more than a year of price history is already available; then the merchant asks for a comparison with “fabrikam.com”, and within 60 minutes, the full comparative price list is compiled and ready for download.
  • API: we are looking for 1-click integrations - well, maybe a bit more than 1-click, but it has to be dead simple. Lokad needs to be able to programmatically interact with the app so that our users don’t have to deal with integration technicalities themselves. If no API exists yet, we expect our partner to provide one when we start working together (a purely hypothetical sketch of the kind of integration we have in mind follows this list).

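To illustrate the level of simplicity we have in mind, here is a purely hypothetical sketch: no such endpoint exists today, and every URL and field name below is an assumption about what a partner API might look like.

```python
# Purely hypothetical sketch of the partner API we are looking for; none
# of these endpoints or fields exist, they only illustrate the level of
# simplicity expected from a 1-click integration.

import json
import urllib.request

BASE = "https://api.example-price-monitor.com/v1"  # hypothetical partner

def request_competitor_prices(domain: str, api_key: str) -> dict:
    """Ask the monitoring app to start tracking a competitor's prices."""
    payload = json.dumps({"domain": domain}).encode()
    request = urllib.request.Request(
        f"{BASE}/competitors",
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Usage (hypothetical): Lokad calls the API on behalf of the merchant, so
# that no integration technicality is ever exposed to the user.
# job = request_competitor_prices("contoso.com", api_key="...")
```
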
We are seeking a partner that thinks of their company as the (future) Google of price search. If you are interested (or know someone who might be), drop us an email at contact@lokad.com.


Salescast and Priceforge have been merged

Published on by Joannes Vermorel.

Innovation is a messy process. If we knew the final destination from the start, the path to getting there would be a lot less convoluted. Lokad is no exception. Over the last year, we kept innovating constantly, starting with inventory forecasting at the beginning of the year and moving on to pricing optimization more recently. Along the way, we introduced a series of apps, Salescast and Priceforge being the most important ones.

Over the last couple of months, our apps were gradually converging, unlocking multiple synergies along the way. Today, we are proud to announce that the user interfaces of both Salescast and Priceforge have been unified into a single Projects view.

This new design reflects the fact that Salescast and Priceforge are intended to be used together, for inventory forecasting as well as for pricing optimization. Our old marketing message - Salescast is for forecasting, Priceforge is for pricing - had become misleading because, in practice, all of our best accounts were using both apps together.

The user interfaces have been merged, but our website and our documentation are still lagging behind. In the coming weeks, we will revisit our website in its entirety to present the app as it now stands.
