Auto-refresh your data from Salescast

Published on by Joannes Vermorel.

The user interface of Salescast had barely changed since the last major upgrade we shipped two years ago. However, under the hood, Salescast had been undergoing steady changes to improve reliability and performance.

This summer, we released native support for Brightpearl, Linnworks and TradeGecko. However, those new capabilities of Lokad were not integrated into Salescast, and, as a result, generating a new forecast report required 4 steps:

  1. Go to Sync in Lokad, and trigger a refresh.
  2. Wait until the data refresh is completed.
  3. Go to Salescast in Lokad, and trigger a run.
  4. Wait until the forecast report is generated.

Obviously, steps 1 and 2 were less than desirable. Thus, we opted for a more drastic revision of the Salescast user interface. The webapp now features a Project creation wizard that lets you directly bind a data source to your Salescast project.

Once the data source is bound to the project, any Salescast run will start by automatically refreshing the data source. This entirely removes the two convoluted steps 1 and 2 as detailed above.

If you have already been using Salescast and wish to benefit from this new feature, you need to delete your existing Salescast project - look for the Delete button below the settings in the project view - and then re-create a project. If you have already configured a data source in the Sync tab, then Salescast will offer you the possibility to use this data source directly, leveraging the existing configuration.

We have another major evolution for the user interface of Salescast in the development pipeline. Stay tuned.

Categories: Tags: salescast, release

Job: Quantitative Business Analyst

Published on by Joannes Vermorel.

Once again the Lokad team needs to expand. This time we are seeking a quantitative business analyst.

Job description

Your goal will be to drive commerce companies - our clients - to improve their performance when tackling a variety of quantitative challenges such as inventory forecasting or pricing optimization. In order to achieve this goal, you will benefit from the technologies that Lokad has developed, as well as from direct mentoring by the Lokad founding team. You will be reporting to the COO of the company.

Your contributions will be varied:

  • deciphering the ins and outs of businesses and assisting clients in extracting the data actually relevant to the resolution of their challenge.
  • communicating a vision of how to best address the business goals considering the technologies available and through a realistic use of the data.
  • exploring the data of the client and assessing both potential data problems and potential data usages aligned with the clients’ business goals.
  • implementing some quantitative optimization logic, along with the corresponding workflows to consume the results produced by Lokad.

The quality of your contributions will have a significant impact on the business value generated by Lokad for the client.

Most of our clients operate either in North America or Europe. You will primarily communicate with them by email and phone. Once in a while, a large client may require an actual meeting to take place, but this is rather the exception than the norm. From the client perspective, Lokad will train you to become the Lokad expert who manages their account.

This position is in our office in Paris (13th arrondissement). This job is not eligible for remote work. Salary depends on experience and is subject to negotiation. We’d prefer at least 1 or 2 years of experience.

Desired Skills

Excellent communication skills, both oral and written, and in both English and French, are necessary. Most businesses are satisfied with abysmal writing, producing documents so boring and confusing that they are not even worth the time it takes to read them. You will be expected to produce sharp, technical and well-written reports.

We do not expect you to have any prior knowledge of commerce optimization. However, you will be dealing with a lot of data. If you happen to be a wizard at Excel calculations, this will strongly play in your favor. Also, while programming skills are not required, even modest skills in this area would be a big plus.

We do not expect you to have much prior knowledge of statistics, especially the modern flavor of statistics favored by Lokad. However, we expect you to have a sharp analytical mind and to be “good with numbers” in general. “Fixing” a broken quantitative optimization process frequently boils down to pinpointing the one incorrect calculation step among a dozen steps or more.

To apply, just drop an email to contact@lokad.com with your resume.

Categories: Tags: job

Inventory forecasting for Aerospace

Published on by Joannes Vermorel.

While the core focus of Lokad’s activity has been on commerce since the very beginning, over the years, we have also delivered forecasts and optimized stock levels for a variety of other verticals. Some verticals prove to be more challenging than others as far as forecasting is concerned, and the aerospace industry - with its low rotations, its highly expensive parts and its costly stock-out incidents, i.e. grounded aircraft waiting for a missing part - is certainly one of the most challenging verticals in terms of forecasting. In particular, classic inventory forecasting models tend to work very poorly for aerospace, primarily because the underlying assumptions behind these classic models (normal distributions, Poisson distributions, weekly or monthly forecasts) completely misfit the actual statistical patterns observed in aerospace.

Over the last 6 months, we have engineered a brand-new forecasting engine purely dedicated to aerospace. At its core, this forecasting engine also leverages quantile forecasts, because this type of forecasting is about the only class of statistical models that actually works for aerospace. However, unlike our initial forecasting engine targeted at commerce, this variant natively integrates the logistics associated with high-cost repairable parts, where components are first changed and then repaired. In particular, turn-around times (TAT) are also modeled through quantile forecasts. In addition, among other industry-specific factors, the fleet composition over time, including known future evolutions, is also natively integrated into the forecasting engine.
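To give a rough flavor of the quantile approach, here is a deliberately naive sketch - made-up numbers, a plain Monte Carlo simulation, and nothing resembling Lokad’s actual engine - that estimates the number of parts needed to cover the demand over a turn-around time at a given service level:

import random

# Assumed historical data (illustration only).
daily_removals = [0, 0, 0, 1, 0, 0, 2, 0, 0, 1]   # part removals per day
tat_days = [25, 32, 40, 28, 35]                    # observed turn-around times, in days

def demand_over_tat_quantile(removals, tats, service_level=0.95, n_sim=10_000):
    totals = []
    for _ in range(n_sim):
        horizon = random.choice(tats)                                # sample a plausible TAT
        totals.append(sum(random.choice(removals) for _ in range(horizon)))
    totals.sort()
    return totals[int(service_level * (len(totals) - 1))]            # empirical quantile

print(demand_over_tat_quantile(daily_removals, tat_days))  # parts covering 95% of TAT scenarios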

Considering the complex structure of the aerospace market, Lokad does not offer a packaged inventory forecasting solution readily accessible online as we do for commerce. However, if you are interested in forecasting for aerospace, don’t hesitate to drop us an email anytime at contact@lokad.com.

Categories: Tags: aerospace

Book: Quantitative Commerce Optimization

Published on by Joannes Vermorel.

We have just released a book!

Quantitative Commerce Optimization with Envision and Priceforge

Our app Priceforge has been designed to support all kinds of commerce-centric data calculations and data visualizations. While it started with a focus on pricing, it can do a lot more.

This book is addressed to commerce executives and commerce analysts who want to harness the power of technology to bring their company to the next stage of profitability.

Many if not most of the challenges faced by commerce are best addressed through quantitative numerical analysis. The advanced optimization of prices, stocks, assortments, promotions, campaigns, and customer loyalty are but a few examples of what can be achieved through data analysis and automated or semi-automated decisions. In commerce, there are simply too many products, clients, suppliers, and competitors. Sticking to manual processes is invariably too costly and too slow.

The first part of the book is a manual to get started with Envision, the programming language dedicated to commerce data processing and visualization by Lokad. The second part of the book is purely dedicated to pricing strategies for commerce.

Categories: Tags: books, commerce, priceforge

Top 10 Oddities in Demand Forecasting

Published on by Joannes Vermorel.

Statistical forecasting is a highly counter-intuitive field. Most assumptions that seem intuitive at first glance turn out to be plain wrong. In this post, we compile a short list of the worst offenders among the statistical oddities that are the bread and butter of Lokad’s business.

1. Advanced forecasting systems DO NOT learn from their errors

Forecasting systems typically refresh their forecasts on a daily or weekly basis. Every time a new batch of forecasts is produced, a forecasting system has the opportunity to compare its older forecasts with the newly acquired data, and possibly learn from this. As a result, it would seem highly reasonable to expect any given forecasting system to learn from its errors, just as a human expert would do. However, this is not the case. An advanced forecasting system will NOT try to learn from its errors. Indeed, better methods are available, namely [backtesting](http://www.lokad.com/backtesting-definition), which offers superior statistical performance. With backtesting, the system re-challenges itself against the entire history available every time a forecast is generated, not just against the latest increment of data.
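As a minimal sketch of the idea - with assumed monthly figures and a deliberately naive moving-average model, not our actual technology - backtesting re-fits the model on the whole history at every cycle, and scores it only on data it has never seen:

history = [12, 15, 11, 14, 18, 16, 17, 20, 19, 22, 21, 24]  # assumed monthly demand

def fit_and_forecast(train):
    # toy model: moving average of the last 3 observations
    return sum(train[-3:]) / 3

errors = []
for cutoff in range(3, len(history)):
    train, actual = history[:cutoff], history[cutoff]
    forecast = fit_and_forecast(train)     # re-trained from scratch on the full past
    errors.append(abs(forecast - actual))  # error measured on unseen data only

print(sum(errors) / len(errors))  # out-of-sample error estimate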

2. The most important statistical factors are noise and randomness

When practitioners are asked about the dominant factors in their demand, many answer: seasonality, product lifecycle, market pressure, business growth, etc. However, most of the time, there is an elephant in the room: the elephant being the statistical noise found in the observation of demand.

Most of the time, the forecasting challenge is addressed as if, given sufficient efforts, demand forecasts could be made accurate. Yet, this viewpoint is incorrect, as most of the time forecasts are irreducibly inaccurate. Embracing the randomness found in the demand usually yields better business results than trying to eliminate this randomness.

3. Expert corrections generally make forecasts less satisfactory

While it seems reasonable to manually adjust statistical forecasts with industry-specific insights, we have observed, on many occasions, that this practice does not yield the desired results. Even when manual corrections are performed by an expert in the field, they tend to degrade the overall accuracy, unless the underlying forecasting system is inherently poor. Only in that case can manual corrections help improve forecast results.

This is often linked to the fact that human perception is heavily biased towards the perception of “patterns”. Frequently this leads to false perceptions of trends, which are nothing more than random business fluctuations. Mistakenly interpreting randomness as a “pattern” tends to generate much more significant errors than just ignoring the pattern in the first place, and treating it as mere noise.

4. Forecasting error must be measured in Dollars

A more accurate forecast does not necessarily translate into better business results. Indeed, the classic way to look at forecasts consists of optimizing metrics such as the MAPE (mean absolute percentage error), which are only weakly correlated with the main business interests. Such metrics are misleading because they stem from the rather delusional idea that forecasts could be made perfectly accurate, in which case the MAPE would drop to zero. However, a perfectly accurate forecast is not a reasonable scenario, and the whole point of using a performance metric is to have it aligned with the interests of the business. In other words, forecasting error should be expressed in Dollars, not in percentages.

Daily, weekly and monthly forecasts are not consistent either.

If forecasts are produced both on a daily basis and on a weekly basis, it would be highly reasonable to expect that, if the daily forecasts are summed into weekly forecasts, then the two sets of forecasts converge to the same values, given that the same technology and the same settings have been used to generate them.

Unfortunately, this is not true, and the two sets of forecasts will diverge; and for very sound statistical reasons too. In short, daily (resp. weekly) forecasts are optimized against a metric expressed at the daily (resp. weekly) level; statistically, as these two metrics are different, the outputs of the numerical optimization have simply no reason to match.

5. SKU-level forecasts do not match category-level forecasts

If the same forecasting system is used to forecast demand both at the SKU level and at the category level, one would expect the two sets of forecasts to be consistent: by summing all the forecasts associated with the SKUs that belong to a given category, it would not be unreasonable to expect to end up with the same number as the forecast for the category itself. Yet, this is not going to be the case, for the same reasons as those outlined in the previous paragraph.

Even more alarming, it is actually very common to observe rather odd situations where completely divergent patterns exist between the forecasts at the SKU level and at the category level. For example, all SKU forecasts might be strictly decreasing, while the forecasts at the category level are steadily increasing. Another typical case is seasonality, which is very visible at the category level, but barely noticeable at the SKU level. When such a situation arises, it may be tempting to try to correct the SKU-level forecasts in order to align them with the category forecasts, but such a technique would only degrade the overall accuracy of the forecast.

6. Changing the unit of measurement does matter

At first glance, the unit used to measure the demand should not have any impact. If demand is counted in inventory units, and if all the points in the history are multiplied by 10, then one would expect all forecasts to be multiplied by 10 without any further consequences. However, with a technology such as the one developed by Lokad, the forecasting process does not work this way - at least, not exactly.

Indeed, an advanced demand forecasting technology leverages many tricks involving small numbers. The quantity of 1 is not just any quantity. For example, we have observed that, on average, supermarket and hypermarket receipts come with more than 75% of their lines carrying a quantity equal to 1. This results in many statistical tricks related to “small numbers”. Multiplying any given demand history by 10 would just confuse all the heuristics in place in any advanced commerce forecasting system.

7. The best promotion forecasts are frequently generated when promotions are ignored

Forecasting promotions is difficult, really difficult. In retail, not only can the demand response to a promotion go from nothing (no uplift) to a 100x uplift, but the factors that influence promotions are complex, diverse and usually not accurately tracked in IT systems. Combining complex business behaviors with inaccurate data is a recipe which is likely to lead to a “Garbage In, Garbage Out” problem.

In fact, we have routinely observed that discarding promotional data was, at least as a very humble initial approach, the least inefficient way to forecast promotional demand. We are not claiming that this method is highly satisfying or optimal, but are merely pointing out that a naive forecast built on correct but incomplete historical data usually outperforms complex models built on more extensive but partially inaccurate data.

8. The more erratic the history, the “flatter” the forecast

If historical data exhibits strong visual patterns, then one would expect the forecast to exhibit similarly strong patterns. However, whenever historical data exhibits erratic variations, this expectation does not hold, and the reverse happens: the more erratic the demand history, the smoother the forecasts.

Again, the root cause here is that the human mind is geared towards the perception of patterns. Erratic fluctuations are not patterns (in the statistical sense) but noise, and a forecasting system, if designed correctly, behaves precisely like a filter for that noise. Once the noise is removed, all that often remains is just a “flattish” forecast.

9. Daily, weekly and monthly forecasts are usually unnecessary

Periodic forecasts are everywhere - from business news to weather bulletins; and yet, they rarely represent an adequate statistical answer to “real-life” business challenges. The problem with these periodic forecasts lies in the fact that instead of directly tackling the business decision that depends on some uncertain future, they are typically leveraged in some indirect way to construct the decision afterward.

A much more effective strategy consists of thinking of the business decisions themselves as being the forecasts. By doing so, it becomes much easier to align forecasts with specific business needs and priorities, e.g. measuring the forecast error in Dollars rather than in percentages, as detailed above.
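As a toy illustration with assumed figures, the snippet below scores the same two forecasts with the MAPE and with a dollar-denominated error; the two metrics point at different culprits:

items = [
    # (actual units, forecast units, unit price in $) - made-up numbers
    (10,  11,  450.0),  # expensive slow mover: 10% off, but $450 of error
    (100, 60,  2.5),    # cheap fast mover: 40% off, but only $100 of error
]

mape = sum(abs(f - a) / a for a, f, _ in items) / len(items)
dollar_error = sum(abs(f - a) * price for a, f, price in items)

print(f"MAPE: {mape:.0%}")                      # 25%, dominated by the cheap item
print(f"Error in dollars: {dollar_error:.0f}")  # 550, dominated by the expensive item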

10. Most of the inventory forecasting literature is of little use

When confronted with a difficult subject, it is reasonable to begin by investigating the peer-reviewed materials available in the scientific literature, especially since thousands of papers and articles on demand forecasting and inventory optimization are available to the reader.

Yet, we have found that the quasi-totality of the methods analyzed in this literature simply do not work. Mathematical correctness does not translate into business wisdom. Many models considered all-time classics are just plain dysfunctional. For instance:

  • Safety stocks are flawed, since they are based on normal distribution assumptions,
  • EOQ (economic order quantity) formulas are inaccurate, as they rely on a flat fee per order, which is completely unrealistic,
  • Holt-Winters is a forecasting model which is quite unstable numerically and requires too much historical depth to be tractable,
  • ARIMA, the archetype of the mathematically-driven approach, is far too complicated for too few results,
  • etc.

Oddities in demand forecasting are (probably) countless. Don't hesitate to post your own observations in the comments section below.

Categories: Tags: insights, forecasting

Lokad is hiring: junior and senior developers

Published on by Joannes Vermorel.

Business is growing and moving fast. A few weeks ago we moved to new offices twice as large as the previous ones. More than ever, we are seeking talented software developers. We have two open positions in Paris: a junior and a senior.

While working with us, you will focus on big data apps. We are a technology-driven company. We put a lot of effort into crafting highly interesting (and challenging) bits of technology. Our apps are lean and focus on the quantitative optimization of commerce. At Lokad, you won't end up being a drone adding the 1001st feature to a shapeless piece of enterprise software.

From a practical perspective, we have bright offices, with transparent panels to keep the noise low and the concentration high. We buy the best tools that money can buy, hardware and software. You get free coffee and free cookies - well, at least when we don't mess up our own replenishment.

You will benefit from a small but highly capable and highly experienced development team which will help you bring your development skills to the next level. Also, we remain a small-sized company where individual contributions actually contribute to the success of the company. Salary will be competitive and depends on experience.

Apply now by sending your resume to contact@lokad.com.

Categories: Tags: hiring

Mitigating supplier stockouts

Published on by Joannes Vermorel.

Most inventory optimization processes are approximate in the sense that the propensity of the suppliers to face a stockout is not modeled. This approximation greatly simplifies the analysis, and as long as suppliers have service levels that are substantially higher than the target service levels of the downstream retailer, the distortions introduced in the inventory analysis are minimal. However, if the retailer seeks service levels higher than the ones offered by its supplier, then things get more complicated, and a lot more expensive inventory-wise too. Let’s briefly review how to mitigate supplier stockouts.

From a pure inventory control perspective, and in line with the quantile forecasting insights, assuming there is only a single supplier available, the correct way to model supplier stockouts consists of adjusting the lead time. Indeed, when the stock is not readily available on the supplier side, the retailer needs to wait until the supplier’s inventory is renewed to get its next replenishment under way. Thus, in order to account for potential supplier stockouts, the applicable lead time is no longer the ordering delay plus the shipping delay, but the same plus the supplier’s own lead time.
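As a tiny illustration with assumed delays (in days), the lead time applicable to the reorder calculation becomes:

# Assumed values, for illustration only.
ordering_delay_days     = 2
shipping_delay_days     = 5
supplier_lead_time_days = 30   # delay for the supplier itself to be replenished

# Usual case: the supplier has stock readily available.
lead_time = ordering_delay_days + shipping_delay_days                    # 7 days

# Case accounting for a potential supplier stockout.
lead_time_with_supplier_stockout = lead_time + supplier_lead_time_days   # 37 days

print(lead_time, lead_time_with_supplier_stockout)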

In practice, however, the lead time of the supplier is frequently much larger than the typical lead time of the retailer. Such situations happen, for example, when the supplier is a wholesaler importing from Asia. In such conditions, trying to achieve a service level greater than or equal to the one of the supplier proves to be a costly exercise, because the applicable lead time may be multiplied several times over to match the one of the supplier. As a result, it’s not infrequent to observe that the stock would need to be more than doubled as a direct consequence of this increase in lead time.

One typical way to mitigate supplier stockouts without resorting to drastic inventory increase consists of introducing some redundancy, either within the offering itself, or by diversifying the suppliers.

Redundancy in the offering happens when some of the goods being sold are similar enough to be considered as substitutes. The presence of substitutes, even imperfect ones, mitigates the supplier’s stockouts - as well as the retailer’s own stockouts - by reducing the damage, since a fraction of the demand can be redirected to the substitute product when the original one is missing. One drawback of this approach is that, frequently, unless dealing with quasi-perfect substitutes, it’s hard to assess whether two distinct products will indeed be perceived as actual substitutes by the clients. Ideally, this would require a statistical analysis of its own. Also, too many substitutes can clutter the offering, making it less appealing to the clients in the end.

Redundancy on the supplier side typically involves secondary suppliers selling at higher prices because the overall purchase volumes are smaller. Those suppliers serve as back-ups if the primary suppliers cannot readily serve the products. The primary benefit of this approach is the extra availability obtained for the exact product that clients seek. However, one major potential drawback lies in the correlation that exists between the inventory levels of the various suppliers. Simply put, if one supplier goes out of stock for a given item, then chances are that the market demand for the product has been surprisingly large, and as a consequence most of the other suppliers will go (or have already gone) out of stock too.

Categories: Tags: insights, inventory optimization

Faster and better Priceforge

Published on by Joannes Vermorel.

Over the last couple of weeks, we have deployed several incremental releases for Priceforge, our pricing optimization webapp. Below, we review some of the latest additions.

Faster data import. Our parsing logic has been largely rewritten for better performance. The app is now loading the data at roughly 30MB/s when processing large flat files. This is about 8x faster than our initial implementation.

Better development UI. The code editor now displays in the sidebar the types of the columns loaded from the flat files, as inferred from the Envision script itself. This is particularly handy for troubleshooting complex scripts. Also, the code editor now warns you if you are about to run a script that does not match the data observed during the last run.

More expressive scripts. The WHERE clause can now be used to filter any event stream, not just items. A new function named concat (for string concatenation) has been introduced, and the title value of any tile is now treated as a regular string scalar.

Several important additions are still being implemented in Priceforge, which should make Envision - our scripting language - even more expressive. In particular, we have started working on time-series operators such as lag or integral. Those are particularly useful for comparing a strategy (pricing/stocking) with an alternative over distinct periods of time. Stay tuned.

Categories: Tags: priceforge, release

Sync by Lokad, native Brightpearl support

Published on by Joannes Vermorel.

Moving data around is the Achilles' heel of most commerce optimization projects, no matter whether it's stocks, prices or staffing being optimized. Over the last year, and thanks to the support of our partners, data integration had become a lesser pain for companies using the Lokad technology, but still, it remained a hassle.

Today, we are proud to announce the immediate availability of Sync, a web app dedicated to the 1-click import of all the relevant business data either for Salescast (inventory optimization) or for Priceforge (pricing optimization). Data are retrieved directly from the third party app - typically through its Web API.

This initial version supports Brightpearl, an awesome commerce management solution. Just create a new project with Sync:

Click Run and a few minutes afterward, the data files are available within BigFiles, our file hosting app, ready to be crunched by our other apps:

For the duration of the beta, Sync is made available for free. We have not made any final decision concerning pricing, but it will be a very affordable monthly subscription.

This release is only the first step of many. Need something more? Don't hesitate to drop us an email.

Categories: Tags: sync, brightpearl, release

Top 10 lies of forecasting vendors

Published on by Joannes Vermorel.



"Prediction is very difficult, especially about the future." (Niels Bohr)


Lokad has been in the forecasting business for more than half a decade now, and while we take immense pride in sticking to what we believe to be the truth, we have witnessed countless times competitors getting away with unabashed lies. Hence, for the record and for the fun of it, let’s list the top 10 lies of forecasting vendors.

1. Our forecasts are accurate

No, they aren’t. Statistical forecasts might be the "best" forecasts available but qualifying those forecasts as “accurate” is, at best, delusional. Statistical forecasting is the technical equivalent of driving a car while looking in the rearview mirror. When it’s the only option available, it has to be done, but overall, it’s a messy exercise. Delusional vendors push clients toward fragile setups which rely too much on a promised accuracy that, reasonably, won’t ever be delivered. For most businesses, the only viable solution consists of embracing the fact that the future is uncertain, and planning accordingly.

2. We will forecast your promotions

We have already written extensively about promotion forecasts. In a nutshell, getting promotion forecasts right is insanely hard. In particular, it takes a massive data qualification effort to have anything exploitable for statistics. The problem is even harder for ecommerce, where the visibility given to any product can go from nothing to front-page placement in minutes - and back to nothing in minutes as well. Claiming that a forecasting technology is good enough to forecast promotions - during the course of a vendor selection process - only proves that the vendor has dangerously little experience with actual promotion forecasts. More seasoned, or more honest, vendors would argue that such a promotion forecasting benchmark is all but guaranteed to end up as a major garbage-in, garbage-out exercise.

3. We are experienced with your vertical

Vendors are so experienced that they don’t hesitate to propose forecasting methods that are guaranteed to fail, such as daily or weekly median forecasts in the case of (e)commerce. At Lokad, it took us a few years to realize that median forecasts were irremediably dysfunctional as far as inventory optimization was concerned. We started to realize the depth of the problem the day we started to observe the actual impact of our forecasts on the business of our clients. It wasn’t pretty, but it proved very enlightening. If the forecasting vendor does not emphasize that your reorder points and reorder quantities are the only relevant forecasts for inventory optimization, the vendor is either deceptive or clueless.

4. Our technology is intuitive and efficient

Statistical forecasting is so intensely counter-intuitive that there is no way to have both something intuitive AND efficient. It’s either intuitive and completely incorrect, or unsettling and possibly correct. It’s just the way the human brain works. We are not geared for a correct perception of randomness. We see patterns everywhere when it’s just statistical noise. What makes statistical forecasts efficient against human "expert" forecasts is precisely the fact that while the forecasting models are pretty dumb by human intelligence standards, when correctly designed, those models are not biased.

5. You will get X% less stock and Y% less stock-outs

Enterprise software vendors absolutely love case studies claiming ground-breaking results. One private joke at Lokad is that, in order to catch up with competitor claims, we should also start claiming that our software cures cancer too. The reality is that the immense majority of those case studies are utter garbage. Numbers are made up, testimonials are made up, and observed improvements, if any, are vastly unrelated to the forecasting solution. The customer is happy because the case study gives them some good press and drives their own competition nuts. Combo: by claiming very loudly that the super-expensive enterprise solution XYZ was the best thing that ever happened to your business, you drive a competitor into making the same outrageously expensive mistake.

6. You will have the freedom to tune your forecasts

And the bullet that you will shoot in your own foot comes free of charge; and you should have a look at the incredible health care plan that you can buy from us too. Giving the client the opportunity to have full control over its forecasts is not technically a lie, but it remains an extremely deceptive move when considering the consequences. It’s a case of moral hazard. In case of success, the technology of the vendor gets praised. In case of failure, the incompetent staff of the client gets blamed. Worse, if the first attempt fails, then there is no alternative but to purchase some extra training sessions from the vendor. Obviously, there is some serious money to be made in prolonging the problem.

7. We use state-of-the-art technologies

As far as statistical learning is concerned, state-of-the-art technologies are found in voice recognition, face recognition, spam filtering and all the other ultra-horizontal machine learning problems where companies like Google, Microsoft and Apple are investing hundreds of millions. It might be humbling to say, but demand forecasting is a somewhat niche business that does not justify hundreds of millions in R&D investments. As a result, even the most technologically aggressive companies like us are about a decade behind what could really be considered state-of-the-art in machine learning. However, the reality is that most forecasting vendors are half a century behind the state of the art: ARIMA, Holt-Winters and Box-Jenkins models were already there back in 1970.

8. A forecasting benchmark cannot lie

Statistics are certainly the most refined way to lie. Lies can be made up from numbers in all sorts of ways. At Lokad, we have faced several benchmarks where returning zeroes as forecasts for all products would have ensured an outstanding victory in the benchmarking process. Yet, setting all inventory levels at zero does not really strike one as a wise business option. Whenever a vendor claims X percent of error, you should really ask yourself how many dollars of error you are going to get. Minimizing percents of error is a very different thing from minimizing dollars of error. Believing that the former drives the latter is vastly mistaken - but vendors won’t hesitate to claim the exact opposite, because percents of error are much safer grounds to decline any responsibility in the subsequent mess.

9. Our software is the best

The vendor website looks like it was designed in the late nineties; it’s not possible to try the software online; there are no screenshots, no public pricing, no public documentation and no API either, but, trust us, it’s really good software. As Riley said, "When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck." Software is one of those businesses where there is absolutely no barrier to selling everything online. Unless your software is deeply dysfunctional, you have zero incentive to keep it hidden. Decent vendors don’t argue they are the best, they just let clients try and see for themselves.

10. We manage your inventory AND your forecasts

Yes, sure. And some chess masters happen to be soccer champions too. Software is all about focus, and what it takes, as a company, to be good at designing, say, an ERP or a WMS, is about the exact opposite of what it takes to be good at designing forecasting software. ERP companies make poor forecasting tools, simply because forecasting is just one add-on among dozens if not hundreds. Despite whatever vendors might say, forecasting cannot be the No. 1 priority of an ERP vendor; it’s a second-class citizen by design. Conversely, if you have a capable team of data scientists, you can’t keep them driven to design the hundreds of screens it takes to build a full-fledged ERP.

Categories: Tags: insights, forecasting

Web API and file exports for Priceforge

Published on by Joannes Vermorel.

Priceforge, our dashboarding and pricing webapp, is evolving rapidly. In particular, two new features, related to process automation, have just been put into production. With those features, it is now possible to design a completely automated setup where, every day and without any manual intervention, dashboards get refreshed and revised prices get imported back into the business systems.

Web API

Inspired by the design already in place for Salescast, Priceforge now has its own Web API (Application Programming Interface). The purpose of this API is to offer the possibility of programmatically controlling the execution of your Priceforge projects from a remote system.

For example, with this API, it becomes possible to write a script that runs outside Priceforge, which first uploads the latest data by FTP to BigFiles - our file hosting service - and second triggers the execution of the relevant Priceforge projects.

This API follows the usual REST patterns with JSON-formatted messages. For now, there are two methods, illustrated by a short sketch below the list:

  • /api/startrun which triggers a project execution.
  • /api/projectstatus which details the state of a project.
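For illustration, a script calling those methods might look like the sketch below. Only the two endpoint names come from this post; the base URL, the authentication scheme and the payload fields are assumptions on our side, not the documented contract, so refer to the actual API documentation for the details:

import requests  # third-party HTTP client

BASE = "https://www.lokad.com/priceforge"   # assumed base URL
AUTH = ("my-login", "my-api-key")           # assumed credentials

# Trigger a project execution.
run = requests.post(f"{BASE}/api/startrun",
                    json={"projectId": 12345}, auth=AUTH)   # assumed payload
run.raise_for_status()

# Check the state of the project afterward.
status = requests.get(f"{BASE}/api/projectstatus",
                      params={"projectId": 12345}, auth=AUTH).json()
print(status)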

File export

Priceforge can perform advanced calculations to compute revised prices or optimized item display ranks - our technology is not limited to prices only. However, to make the most of Priceforge, this data needs to be imported back into the relevant business systems, Magento or Prestashop for example.

Priceforge now supports a built-in mechanism to export data through the file tile. A tile - in Priceforge - is one of the elementary blocks that compose a dashboard. For example, Priceforge supports very visual tiles such as the barchart or the linechart.

The tile of type file has two outputs. First, the tile gets displayed as a plain block within the dashboard. If this tile gets clicked, you download the file. Second, the file is pushed back to BigFiles at the specified location. For example:

show file "/foo/my-prices.tsv" with Id, Label, Price

This tiny Envision script defines a tile of type file which produces a file named my-prices.tsv that is pushed to the folder named /foo in BigFiles. This file then becomes available for download through FTP.

Categories: Tags: priceforge, api, bigfiles

How to mitigate overfitting when forecasting demand?

Published on by Joannes Vermorel.

Our video about overfitting has received its share of attention since it was published 5 years ago, that is, half a century ago for a startup like Lokad. Years later, we have made a lot of progress, but overfitting remains a tough matter.

In short, overfitting represents the risk that your forecasting model is accurate only at predicting the past, and not at predicting the future. A good forecasting model should be good at predicting the data you do not have.

A common misconception is that there is no other way to assess a model except by checking its performance against the historical data. True, the historical data must be leveraged; however, if there is one insight to remember from the Vapnik-Chervonenkis theory, it is that not all models are born equal: some models carry a lot more structural risk - a concept that is part of the theory - than others. Entire classes of models can be considered either safe or unsafe from a pure theoretical perspective, which translates into very real accuracy improvements.

Overfitting issues cannot be avoided entirely, but they can be mitigated nonetheless.

There are several ways to mitigate overfitting. First, the one rule you should never break is: a forecasting model should never be assessed against the data that has been used to train the model in the first place. Many toolkits regress models on the entire history and then estimate the overall fit afterward. Well, as the name suggests, such a process gives you the fit but nothing more. In particular, the fit should not be interpreted as any kind of expected accuracy; it is not. The fitting error is typically much lower than the error observed on future data.

Second, one simple way of mitigating overfitting is to perform extensive back-testing. In practice, it means your process needs to split the input dataset over dozens - if not hundreds - of incremental date thresholds, and re-train and re-assess all the forecasting models each time. Backtesting requires a lot of processing power. Being able to allocate the massive processing power it takes to perform extensive back-testing was actually one of the primary reasons why Lokad migrated toward cloud computing in the first place.
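A minimal sketch of the mechanics - made-up weekly sales and two deliberately naive candidate models, standing in for the real thing - looks like this:

history = [3, 0, 1, 0, 2, 4, 0, 1, 3, 0, 2, 5, 1, 0, 2, 3]  # assumed sales per week

models = {
    "last_value":   lambda past: past[-1],
    "mean_4_weeks": lambda past: sum(past[-4:]) / 4,
}

scores = {name: 0.0 for name in models}
thresholds = range(4, len(history))          # dozens or hundreds in a real setup
for t in thresholds:
    past, actual = history[:t], history[t]
    for name, model in models.items():
        scores[name] += abs(model(past) - actual)   # error on unseen data only

best = min(scores, key=scores.get)
print(best, {name: round(err, 1) for name, err in scores.items()})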

Third, even the most extensive back-testing is worth little if your time-series are sparse in the first place, that is, if the time-series represent items with low sales volumes. Indeed, as most of the data points of such time-series are at zero, the back-testing process learns very little by iterating over zeroes. Unfortunately for commerce, about 90% of the items sold or serviced have a demand history that is considered sparse from a statistical viewpoint. In order to address this problem, the performance of the model should be assessed from a multiple time-series viewpoint. It's not the performance of the model over a single time-series that matters, but its performance over well-defined clusters of time-series. Then, everything becomes a balance between local and global empirical accuracy when it comes to selecting the best model.

Any questions? Don't hesitate to post them as comments.

Categories: Tags: overfitting, insights

Visual dashboard editor for Priceforge

Published on by Joannes Vermorel.

Our app Priceforge is a data visualization engine as well as a price optimizer. Through its syntax, Envision offers a programmatic way to define the tiles that appear in your dashboard.

While it's very handy to have absolute control over the way numbers are computed, picking the colors of your tiles and composing the layout of your dashboard from the programming language itself is tedious. That's why Priceforge supports a visual editor for your dashboards.

Let's have a look at the sample dashboard provided by Priceforge:

Notice the Editor button at the top right of the window. If you click this button, it toggles the visual editor mode of Priceforge. All the tiles get dark and their respective positions - following the Excel-like grid system used by Envision - appear on top of each tile.

Then, while it's possible to move each tile individually, the visual editor offers a simple way to move all tiles up or down. Just click one of the green crosses on the left, and a new blank line is inserted.

You can also remove this blank line by clicking the red cross that now appears on the right.

If you click a particular tile, a modal dialog box appears. This box is titled Edit Properties, as illustrated below. From this box, you can control the title, the position and size, the unit and the color of the tile. In practice, the position and the color are the handiest properties to adjust from the visual editor.

Once you're done with your changes, don't forget to save them by clicking the button Save tile properties that now appears at the top right of the window.

Categories: Tags: priceforge

Short-term vs long-term pricing analysis

Published on by Joannes Vermorel.

At first glance the notion of price elasticity of demand looks fairly reasonable, tractable even. Elasticity gives the percentage change in quantity demanded in response to a one percent change in price.

Putting aside a couple of odd situations where an increase of price creates an increase of demand, in the vast majority of situations, the elasticity is negative: demand decreases as price increases.

Most of the pricing literature indicates that price elasticity is a very desirable metric, because, through its analysis, it becomes possible to calculate the optimal price, that is, the price that maximizes the margin.
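For reference, the textbook result under a constant-elasticity demand model q = A * p^elasticity is that the margin-maximizing price is p* = cost * elasticity / (1 + elasticity), valid only when elasticity < -1. The numbers below are purely illustrative, and the remainder of this post explains why trusting this formula blindly is dangerous:

def optimal_price(unit_cost, elasticity):
    # Margin-maximizing price under a constant-elasticity demand curve.
    if elasticity >= -1:
        raise ValueError("no finite optimum: demand is too inelastic")
    return unit_cost * elasticity / (1 + elasticity)

print(optimal_price(unit_cost=10.0, elasticity=-3.0))  # 15.0, i.e. a 50% markup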

However, as far as commerce is concerned, we observe that price elasticity is a misleading metric, even when measured correctly - which is itself a tough challenge.

Indeed, for just about any commerce, most elasticity analyses tend to show that prices can be increased without much impact on demand. Worse, if a small-scale A/B test is carried out, the test will confirm the analytical insight provided by the elasticity analysis.

And yet, the conclusion is plainly (deadly) incorrect.

Pricing is a signal sent to the market, and the market is made of habits. Only the most price-sensitive buyers make the effort of systematically checking the prices of the competition. Most buyers do it only once in a while.

If your commerce were to increase all its prices by 20%, what would happen over the next 2 weeks? For most commerces, not much. Yet, within a couple of months, market shares would decline rather abruptly - unless the pricing shift is part of a complete rebranding intended to reach richer segments.

In the short-term, demand tends to be fairly inelastic because habits dominate. In the long-term, it's the opposite: it's almost impossible to maintain a higher price than the competition if the package (product + service) is the same.

From a pricing perspective, it's important not to be fooled by short-sighted quantitative analysis. Price elasticity is relevant, but, by construction, it is short-sighted because it ignores that commerce is a repeated game where the goal is not to maximize the margin on the next client purchase, but rather to optimize the market shares that offer the best sustainable margins.

Need a tool that gives you the insights it takes to craft solid prices? Don't hesitate to have a look at Priceforge, our pricing webapp.

Categories: Tags: pricing, insights

Ignore prices, only pricing strategies matter

Published on by Joannes Vermorel.

In order to achieve better pricing in commerce, the whole initiative should start by realizing that prices themselves are irrelevant. Only the pricing strategy matters, that is, the logic that crunches all the inputs, such as the purchase prices and all the other relevant variables, in order to produce the final price values.

When asked about the first step to get better prices, many retail practitioners answer: knowing the prices of your competitors. Rubbish. The first step consists of transitioning from implicit pricing strategies to explicit strategies, because only the latter are subject to measurable improvements.

Unless you’re quite familiar with the concept of pricing, this might sound very confusing.

The most difficult challenge of pricing is that you can’t replay the past. Once you’ve set a price, you will never know how many sales you would have got if you had put another price on display.

Oh yes, you can still change the price now and observe the sales for the next month, but are your sales growing because the price is going down, or because your web traffic is going up, or because your new product picture is more attractive? You will never know for sure. Actually, it’s not just you. Nobody, and certainly not us at Lokad, will ever know for sure.

Technically, we can argue that pricing is not eligible for backtesting.

Focusing on the prices themselves is a defective process in the sense that this process can’t be challenged. Prices can be changed, obviously, but, except for pathological situations where obvious pricing mistakes get corrected, your company won’t be able to decide if the new prices made the situation better or worse.

As the old saying goes, you can’t optimize what you can’t measure.

What can be challenged, however, is the pricing strategy. The pricing strategy is the logic, the set of rules, that processes the input data such as purchase costs, customer acquisition costs, inventory costs, prices of competitors ... and that produces the final public prices to be put on display.

Unlike raw prices, a pricing strategy can be challenged: given two pricing strategies, a strict experimentation protocol can be devised to decide which one of the two strategies is the most profitable one. Designing such a protocol is not a simple task, we will get back to this in a later post.

Intuitively, if you have 1000 items to be priced, you can assign the first 500 to the first strategy, and the last 500 to the second strategy. If the two pools of items are comparable, then it becomes possible to assess the performance of the respective strategies.
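A bare-bones sketch of this experiment - using a random split rather than the first/last 500, which is one simple way to keep the two pools comparable, and with placeholder margin figures standing in for the sales actually observed during the test period - could look like this:

import random

items = [f"sku-{i}" for i in range(1000)]
random.shuffle(items)                        # random split keeps the pools comparable
pool_a, pool_b = items[:500], items[500:]

# In a real experiment, these margins would come from the sales recorded
# during the test period; placeholder numbers are used here.
margin_by_sku = {sku: random.uniform(0, 50) for sku in items}

margin_a = sum(margin_by_sku[sku] for sku in pool_a)
margin_b = sum(margin_by_sku[sku] for sku in pool_b)
print("strategy A:", round(margin_a), "strategy B:", round(margin_b))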

In the past, a few very large online merchants tried to display different prices to different customers just for the sake of gaining further market knowledge. In order to be fair, somehow, all customers were offered, at the end of the check-out, the lowest price. This approach stirred controversies and, as far as we know, it’s not used anymore, at least not at scale. Furthermore, in many countries, customer protection laws prevent retailers from tweaking their prices per customer.

Unfortunately, in most retail businesses, the pricing strategy does not exist anywhere but in the mind of the people in charge of setting the prices. Frequently, a myriad of spreadsheets also contains bits of pricing logic. However, as spreadsheets mix both data and logic, updating those spreadsheets with the latest data is error-prone and time-consuming.

With such a setup, pricing strategies remain implicit and unchallenged, and consequently the performance of the pricing remains stagnant. Worse, any market change that ends up reflected in the prices requires a lot of manpower just to re-enter the revised prices in the system somehow.

Thus, any pricing initiative in retail should start by transitioning toward an explicit pricing strategy that, given the proper data inputs, can be executed by a machine in order to produce the revised price values.

Some practitioners might argue that the machine is pretty dumb and that it will never know the market like they do. Well, this is absolutely true. Having a fully automated pricing logic just happens to be the simplest way to make sure that the pricing logic is well-defined (non-ambiguous, conclusive, etc.); however, this logic might be nothing more than the formal transcription of the pricing logic as understood by the practitioner herself. The machine is not expected to invent the pricing strategy, merely to execute it whenever refreshed prices are needed.

Priceforge, our pricing optimization webapp, has been designed precisely to let your company write its pricing strategies, because it’s the first step toward a situation where it becomes possible to actually improve the pricing.

Categories: Tags: pricing, priceforge, insights

We're hiring: Junior Account Manager

Published on by Joannes Vermorel.

Lokad is growing, and we need you. As a junior account manager, you will learn how to grow prospects into clients, and then, learn how to support them so that they make the most of our technology.

Mission

Your goal will be to establish and grow commercial relationships between Lokad and its clients. As Lokad is reaching a global market, you will be expected to converse by email in English with companies located pretty much anywhere on the globe. You will also be expected to handle phone calls, in English. Note that we don’t do much cold calling - so you won't spend your days being rejected after trying to call people who don’t know or don’t care about Lokad.

The position is in our office in Paris (13th arrondissement). This job is not eligible for remote work. Salary depends on experience and is subject to negotiation.

Profile

We are seeking an enthusiastic and communicative junior. Impeccable command of English is critical because most of our business happens outside France. For practical reasons, we also expect impeccable command of French.

We do not expect you to be knowledgeable about Big Data, but if you happen to be a bit savvy with the web, or software, or ecommerce, it will be appreciated.

Skills

  • Impeccable command of English and French.
  • Sharp analytical mind.

To apply, send your resume to contact@lokad.com. In your message, please explain briefly why you would be a good fit for this job.

Categories: Tags: hiring

Control your service levels, don’t let them control you

Published on by Joannes Vermorel.

In retail, many companies don’t have much control over their service levels. In fact, many companies don’t monitor the service level where it matters the most: the physical store. Indeed, measuring in-store service levels is a tedious exercise. Some companies - mostly panelists - specialize in doing this sort of measurement, but the cost is steep, as there is no workaround for the extensive manpower involved in the process.

Taking a step back, why do we even need to measure the service level?

Wouldn’t it be more convenient if the service level was something obtained by design and defined through explicit settings within the inventory optimization software? This would certainly be a lot more practical. Service levels certainly don't need to be an afterthought of the inventory optimization process.

It turns out that, historically, the need to measure service levels came from early inventory optimization methods, such as safety stock analysis, that offer almost no control over actual service levels. Indeed, the underlying models rely on the assumption that the demand is normally distributed, and this assumption is so wrong in practice that most retailers gave up on it in favor of ad hoc safety stock coefficients.

Those ad hoc safety stock coefficients are not bad per se: they are certainly better than relying on abusive assumptions about the future demand. However, the quantitative relationship between the safety stock and the service level is lost. Hence, retailers end up measuring their service levels and tweaking the coefficients until inventory somehow stabilizes. In the end, the situation is not satisfying because the inventory strategy is inflexible: safety stock coefficients can’t be changed without exposing the company to a myriad of problems, repeating the tedious empirical adjustments done originally.

However, with the advent of quantile forecasting, it’s now possible to produce forecasts that very accurately drive the service levels, even if the quantile forecasts themselves are not accurate. All it takes is unbiased forecasts, and not perfectly accurate forecasts.

Indeed, quantile forecasts directly and very natively address the problem of producing the reorder quantities it takes to cover target service levels. If a new and better quantile forecasting technology is found, then this technology might be capable of achieving the same service levels with less inventory, but both technologies deliver the service levels they promise by design.
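As a bare-bones illustration - assumed demand samples and a crude empirical quantile standing in for an actual quantile forecasting model - the reorder point can be read directly at the target service level:

lead_time_demand_samples = [0, 2, 1, 4, 0, 3, 1, 2, 5, 1, 0, 2, 3, 1, 2]  # assumed
service_level = 0.90

def reorder_point(samples, service_level):
    ordered = sorted(samples)
    return ordered[int(service_level * (len(ordered) - 1))]   # empirical quantile

print(reorder_point(lead_time_demand_samples, service_level))
# By design, roughly 90% of the lead-time demand scenarios are covered by this stock level.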

This behavior is very unlike the case of classic forecasting allied to safety stock analysis, where an improvement in accuracy, while desirable, leads to erratic results in practice. For example, for many low-volume products, as observed in stores, shifting to a dumb forecasting model that always returns zero usually improves the accuracy defined as the absolute difference between the actual sales and the forecasted sales. Obviously, shifting toward zero forecasts for half of the products can only end in dismal business results. This example might appear anecdotal, but it is not. Zero forecasts are the most accurate classic forecasts in numerous situations.

Thus, in order to take control of your service levels, it takes an inventory optimization methodology where such a control is built-in.

Categories: Tags: quantiles, inventory, insights, inventory optimization

Prestashop add-on for BotDefender

Published on by Joannes Vermorel.

A few weeks ago, we released an add-on for Magento, and now it's Prestashop's turn to get protected by BotDefender against the automated retrieval of all your prices by competitors. Check out our Prestashop add-on for BotDefender (also available from the Xtendify store).

Protecting your store now has its 1-click solution!

This add-on was designed as a joint effort between Xtendify, our Prestashop specialist, and the Lokad team. As we did for Magento, we paid the same attention to the add-on's performance.

Meanwhile, as add-ons are being released, the BotDefender infrastructure has also been upgraded for better performance. BotDefender now auto-diagnoses the need for SSL and reverts to plain HTTP whenever applicable.

Categories: Tags: prestashop, botdefender, scraping