
Q&A about inventory optimization software

Published by Joannes Vermorel.

Under the supervision of Prof. Dr. Stefan Minner, Leander Zimmermann and Patrick Menzel are writing a thesis at the Technical University of Munich. The goal of their study is to compare inventory optimization software. Lokad received their questionnaire and, with the authors' permission, we are publishing both their questions and our answers here.

1. When did you introduce your optimization software to the market?

Lokad was launched in 2008, but as a pure demand forecasting solution at the time. We started to do end-to-end supply chain optimization in 2012.

2. For which company sizes is your software suitable?

We have clients ranging from 1-man companies to companies with over 100,000 employees. However, below 500k€ worth of inventory, the statistical optimization of the supply chain is frequently not worth the effort.

3. For a midsized company of around 50-250 employees, with sales of around 10-25 million euros per year, what would be the price of your standard software package?

This would be our Premier package at $2500 / month. However, the package covers a lot more than just software; pure software accounts for only about 1/5th of our fees.

The bulk of the fee goes into paying a data scientist at Lokad who manages the account, leveraging our technology stack to deliver the final results. That's what we call inventory optimization as a service.

4. Is your software suitable for different industries? (e.g. pharmacy, metal, perishable goods, …)

Yes, we support diverse verticals from aerospace to fashion with fresh food in the middle. However, our software is primarily a programmatic toolkit tailored for quantitative supply chain optimization. While we do address many verticals, it usually takes a data scientist to craft the finalized solution.

5. What characteristics of your software differentiate you from other optimization software? (Unique selling proposition)

Classic forecasts, and by extension the classic inventory optimization theory, work poorly - surprisingly poorly, even. It took Lokad years to realize that the main challenge - statistically speaking - lies in the extreme cases, because those are what actually cost money. Lokad delivers probabilistic forecasts. Whenever inventory is involved, probabilistic forecasts are simply better than classic ones.
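
To make this concrete, here is a tiny Python sketch (an illustration only, neither Envision nor how Lokad's engine actually works) of why a distribution beats a single number when sizing stock: two demand distributions with the same mean - hence the same classic forecast - call for very different stock levels once a service level target enters the picture.

# A point forecast collapses demand to its mean; a probabilistic forecast keeps
# the whole distribution, which is what actually drives the stocking decision.
from scipy.stats import poisson, nbinom

mean_demand = 10          # both distributions share the same "classic" forecast
service_level = 0.95      # target probability of not stocking out

# Low-dispersion demand (variance = 10)
stock_low = poisson.ppf(service_level, mean_demand)

# High-dispersion demand with the same mean (variance = 60)
n, p = 2, 1 / 6           # negative binomial with mean n * (1 - p) / p = 10
stock_high = nbinom.ppf(service_level, n, p)

print(stock_low, stock_high)   # 15.0 vs 25.0 units to reach the same 95% target

Same average demand, yet 10 extra units are needed on the erratic item; a classic forecast cannot see the difference, a probabilistic forecast does.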

6. For which computer platforms is your software applicable? (e.g. Microsoft, Apple, Linux, …)

Lokad is a SaaS (webapp) built on top of a cloud computing platform (Microsoft Azure). Our clients are very diverse. However, in supply chain, there are still more IBM Mainframes out there than OSX setups.

Without a cloud computing platform, it would be very impractical to run the machine learning algorithms that Lokad routinely leverages; thus, our software is not designed to run on-premise.

7. Does your company provide standardized or personalized software solutions?

Tricky question and subtle answer.

Lokad delivers a packaged platform. We are multi-tenant: all our clients run on the same app. In this respect, we are heavily standardized.

Yet, Lokad delivers a domain-specific language called Envision. Through this language, it's possible to tailor bespoke solutions. In practice, most of our clients benefit from fully personalized solutions.

Lokad has crafted a technology intended to deliver personalized supply chain solutions at a fraction of the costs usually involved with such solutions by boosting the expert's productivity.

8. If it is a standardized software, which features are included in the standard package of your software?

We have over 100 pages worth of documentation. For the sake of concision, the features won't be listed here.

9. Are there add-ons available? If yes, which? (e.g. spare parts, …)

We don't have add-ons in the sense that every single plan - even our free plan - includes all features without restriction.

10. For which stages/levels can your software optimize inventory management? (e.g. factory, warehouse, supplier, …)

We cover pretty much all supply chain stages - warehouses, points of sale, workshops - both for forward and reverse logistics.

11. Is your software solving the problems optimally or heuristically?

Computer Science tells you that nearly every non-trivial numerical optimization problem can only be resolved approximately. Even something as basic as bin packing is already NP-complete, and bin packing is far from being a complex supply chain problem.
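
To make the point concrete, here is a minimal Python sketch (our illustration, unrelated to Lokad's stack) of the classic first-fit decreasing heuristic for bin packing: it runs fast and usually does well, but it carries no optimality guarantee, which is the trade-off accepted in practice for nearly all supply chain optimization problems.

def first_fit_decreasing(items, capacity):
    """Pack item sizes into bins of fixed capacity, heuristically."""
    bins = []                          # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])        # no existing bin fits: open a new one
    return bins

# 7 shipments to pack into containers of capacity 10
print(first_fit_decreasing([6, 5, 5, 4, 3, 2, 2], capacity=10))
# -> [[6, 4], [5, 5], [3, 2, 2]]: fast and good, but optimality is not guaranteed in general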

Many vendors - maybe even Lokad (I try hard to resist marketing superlatives) - may claim to have an "optimal" solution, but, at best, this should be considered dolus bonus, i.e. an acceptable lie, akin to TV ads boasting an unforgettable experience or similar semi-ridiculous claims.

I advise checking my earlier post about the top 10 lies of forecasting vendors. Any vendor who would seriously claim to deliver an "optimal" solution - in the mathematical sense - would be either lying or delusional.

12. Which algorithms is your software using? (e.g. Silver-Meal, Wagner-Whitin, ...)

Both Silver-Meal and Wagner-Whitin come from the classic perspective, where future demand cannot be expressed as arbitrary non-parametric probability distributions. In our book, those algorithms fail to deliver satisfying answers whenever uncertainty is present.
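
For reference, here is a minimal Python sketch of the Silver-Meal heuristic (our illustration, not Lokad code): it picks the lot size that minimizes the average cost per period, but it only accepts one deterministic demand number per period - there is simply no slot where a probability distribution could be plugged in, which is exactly why such methods break down once uncertainty dominates.

def silver_meal(demand, setup_cost, holding_cost):
    """Lot sizing over deterministic per-period demand (the classic perspective)."""
    orders, t = [], 0
    while t < len(demand):
        best_avg, horizon = None, 1
        for n in range(1, len(demand) - t + 1):
            # units ordered for period t+k are carried during k periods
            holding = sum(k * demand[t + k] * holding_cost for k in range(n))
            avg = (setup_cost + holding) / n
            if best_avg is None or avg < best_avg:
                best_avg, horizon = avg, n
            else:
                break                  # average cost per period starts rising: stop
        orders.append((t, sum(demand[t:t + horizon])))
        t += horizon
    return orders

# one deterministic number per period - no room for uncertainty
print(silver_meal([20, 50, 10, 80, 30], setup_cost=100, holding_cost=1))
# -> [(0, 80), (3, 110)]: order 80 units in period 0, then 110 units in period 3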

Lokad uses over 100 distinct algorithms, most of which have no established name in the scientific literature. Specialization is king. Most of those algorithms are only new/better in the sense that they provide a superior solution to a very narrow class of problems - as opposed to generic numerical solvers.

13. Where are the limits in terms of input quantities which can be calculated at once? (e.g. size of cargo, different products, period of time, …)

The numerical limits of our technology are typically ridiculously high compared to the actual size of supply chain challenges. For example, no more than 2^32 SKUs can be processed at once. Through cloud computing, we can tap nearly unbounded computing resources.

That being said, unbounded computing resources also imply unbounded computing costs. Thus, while we don’t have hard limits on data inputs or outputs, we pay attention to keep those computing costs under control, adjusting the amount of computing resources to the scale of the business challenge to be addressed.

14. How many variables can be chosen and how many are given? (e.g. degree of service, period of time, Lot size, ...)

Lokad is designed around Envision, a domain-specific programming language dedicated to supply chain optimization. This language offers programmatic capabilities, hence, again, the hard limits are so high that they are irrelevant in practice. For example, the language would not support more than 2^31 variables.

However, dealing with more than 100 heterogeneous variables at once would already be an insanely costly undertaking from a practical perspective: each variable needs to be qualified, fed with proper data, properly adjusted to fit into the bigger model, etc.

15. Does your inventory management support multiple supply chains for one stock?

Yes. There might be multiple sources AND multiple consumers for a given stock. Inventory can be serial too: each unit of stock may have some unique properties influencing the rest of the chain. This situation is commonly found in aerospace for example.

16. If yes, can those supply chains be prioritized/classified? (e.g. ABC/XYZ products)

Yes. However, prioritization is usually more expressive than classification. We strongly discourage our clients from using ABC analysis, because a lot of valuable information gets lost through such a crude classification.
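
As a small illustration of the difference (hypothetical Python, not how Lokad is implemented), compare a crude ABC bucketing with a continuous economic prioritization: the ranking keeps the full ordering that the three buckets throw away.

# Hypothetical SKUs scored by expected dollar return per extra unit stocked
skus = {"A1": 420.0, "B7": 55.0, "C3": 54.0, "D9": 2.0}

# ABC classification: collapse everything into three crude buckets
def abc_class(value, a_cut=100.0, b_cut=10.0):
    return "A" if value >= a_cut else "B" if value >= b_cut else "C"

print({sku: abc_class(v) for sku, v in skus.items()})
# -> {'A1': 'A', 'B7': 'B', 'C3': 'B', 'D9': 'C'}: B7 and C3 become indistinguishable

# Prioritization: keep the full ranking and purchase down the list until the budget runs out
print(sorted(skus.items(), key=lambda kv: kv[1], reverse=True))
# -> [('A1', 420.0), ('B7', 55.0), ('C3', 54.0), ('D9', 2.0)]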

17. Which method of demand forecasting is implemented? (e.g. moving average, exponential smoothing, Winter’s Method, …)

Moving average, exponential smoothing, Holt and/or Winter’s methods, all those methods produce classic forecasts – aka average or median forecasts. Those forecasts invariably work poorly for inventory optimization because they can’t capture a truly stochastic vision of the future. Plus, as a separate concern, they can’t correlate demand patterns between SKUs either.
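
For concreteness, here is a tiny Python sketch of simple exponential smoothing (our illustration): whatever the history looks like, the output is a single average value, with no notion of how likely higher or lower outcomes are.

def exponential_smoothing(history, alpha=0.3):
    """Classic forecast: returns one average number, not a distribution."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

print(exponential_smoothing([12, 0, 7, 30, 0, 5, 18]))
# -> about 11.5, a single point estimate for the next period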

As the counterpart of its constrained optimization algorithms (detailed above), Lokad also has over 100 algorithms in the field of statistical forecasting. Most of those algorithms have no well-known name in the literature either. Yet, again, specialization is king.

18. How many past periods are considered to calculate the future demand?

The idea that past demand should be represented as periods is mostly wrong. The granularity of the demand is important: 10 clients ordering 1 unit each is not the same thing as 1 client ordering 10 units at once. Our algorithms are typically not based on periods.

Then, in terms of depth of the history, our algorithms typically try to leverage all the history available. In practice, it's rare that looking further than 10 years back yields any gain in the forecasts. So there is no hard limit; it's just that the past fades into numerical irrelevance.

19. Is the seasonal change in demand included in the forecast? (yes/no)

Yes. However, seasonality is only one of the cyclicities that exist in the demand: day of week and day of the month are also important, and also handled. Then, we have also made recent progress on quasi-seasonality: patterns that don’t exactly fit the Gregorian calendar such as Easter, Chinese New Year, Ramadan, Mother’s day, etc.
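
A small Python sketch (our own illustration) of what several overlapping cyclicities look like in practice: one historical date contributes to a weekly, a monthly and a yearly pattern at once, while a quasi-seasonal event such as Easter moves every year and therefore needs its own indicator rather than a fixed calendar position.

from datetime import date

def cyclic_features(d: date) -> dict:
    """Several overlapping cyclicities extracted from a single date."""
    return {
        "day_of_week": d.isoweekday(),        # weekly cycle (1 = Monday)
        "day_of_month": d.day,                # monthly cycle (payday effects, ...)
        "week_of_year": d.isocalendar()[1],   # yearly cycle, i.e. seasonality
    }

# Quasi-seasonality: Easter moves every year, so a fixed week number cannot capture it.
easter = {2015: date(2015, 4, 5), 2016: date(2016, 3, 27)}   # illustrative lookup table

def days_to_easter(d: date) -> int:
    return (easter[d.year] - d).days

print(cyclic_features(date(2016, 3, 20)), days_to_easter(date(2016, 3, 20)))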

20. What kind of performance measures can be analyzed? (e.g. waiting time, ready rate, non-stockout probability, degree of service, …)

As long as you can write a program to express your metric, it should be feasible with Lokad. Yet again, Lokad offers a domain-specific programming language, so we are flexible by design. In the end, there is one metric to rule them all: the dollars of error.
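
A minimal Python sketch (our illustration, not Envision syntax) of what a dollar-denominated metric looks like: each unit short is priced at its stock-out penalty and each unit in excess at its carrying cost, so the two sides of the forecast error are no longer treated as symmetric.

def dollars_of_error(demand, stocked, stockout_cost, carrying_cost):
    """Price the error in dollars rather than in units."""
    if demand > stocked:
        return (demand - stocked) * stockout_cost    # lost margin, penalties, ...
    return (stocked - demand) * carrying_cost        # capital, storage, write-off risk

# The same 5-unit error, yet a very different dollar impact
print(dollars_of_error(demand=20, stocked=15, stockout_cost=30.0, carrying_cost=2.0))  # 150.0
print(dollars_of_error(demand=15, stocked=20, stockout_cost=30.0, carrying_cost=2.0))  # 10.0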

21. Does your software support the implementation of penalty costs? (e.g. cost for “out of stock”, “capacity limits reached”, …)

Yes, it's one special case of the many business drivers that we take into account. Those penalties can take many numerical shapes: linear or not, deterministic or not, etc.

22. Which are your three strongest competitors in your market segment?

Excel, Excel and Excel. Number 4 is pen+paper+guesswork.

23. Do you have a list of companies (mid-size to large-size) using your software?

See our customers page.


Forecast's species: classification vs. regression

Published by Joannes Vermorel.

The word forecasting covers a very large spectrum of processes, technologies and even markets. In the past, we introduced the worlds of forecasting software, distinguishing between:

  • Deterministic simulation software
  • Expert aggregation software
  • Statistical forecasting software

Lokad falls into the last category, as our technology is purely statistical. Yet, Lokad is far from covering the entire statistical spectrum on its own. Two broad categories of forecasts exist in statistical forecasting (*):

  • Classification forecasts
  • Regression forecasts

(*) We are oversimplifying here for the sake of clarity, as statistical learning subtleties are well beyond the scope of this modest blog post.

Classification attempts to separate (or classify) objects according to their properties. The illustration below, from Tomasz Malisiewicz, shows a classification task trying to separate images picturing a chair from images picturing a table.

Illustration from tombone's blog

The output of a classification is binary (or rather discrete): objects get assigned to classes with more or less confidence, i.e. higher or lower probabilities.

On the other hand, regressions typically output curves. The illustration below considers a time series representing historical sales, and displays the corresponding forecast.

The regression forecast is a curve rather than a binary setting (or a combination of binary settings). The input gets prolonged into the future.
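
The contrast can be made concrete in a few lines of Python (purely illustrative, not Lokad technology): the classifier returns a discrete label with a probability attached, while the regressor returns a number that prolongs the input curve.

from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: a discrete label (plus a probability) out of object properties
X_cls = [[45, 1], [75, 0], [50, 1], [72, 0]]      # e.g. [height_cm, has_backrest]
y_cls = ["chair", "table", "chair", "table"]
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[48, 1]]), clf.predict_proba([[48, 1]]))

# Regression: a curve prolonged into the future
X_reg = [[1], [2], [3], [4]]                      # past periods
y_reg = [10.0, 12.0, 13.0, 15.0]                  # historical sales
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5], [6]]))                    # forecast for the next two periods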

How does this distinction impact the business?

Well, it turns out that Lokad - as it stands in early 2010 - only delivers regression forecasts. Thus, there are many interesting problems that cannot be tackled by Lokad because they are classification problems:

  • Customer segmentation: for each customer, we would like to evaluate the probability of achieving a successful up-sell through a direct marketing action. Following the same idea, we could try to predict churn as well.
  • Fraud detection: for each transaction, we would like to evaluate - based on the transaction pattern - the probability for the operation to be a fraud attempt.
  • Deal prioritization: based on the properties of the prospect (availability of budget, industry, contact rank in the company, expressed level of interest, ...), we would like to evaluate the likelihood to get a profitable deal out of each prospect to prioritize the sales team efforts.

Frequently, we are asked whether Lokad could deliver classification forecasts as well. Unfortunately, the answer will be negative for the time being. Albeit rooted in the same mathematical theory, classification and regression entail very different technologies; and Lokad is pushing all its efforts toward regression problems.

That said, we are not dismissive of classification problems; they truly deserve attention and effort. For 2010, we are sticking to our roadmap, but further ahead, classification could be a natural extension of our forecasting services.


Machine learning company, what’s so special?

Published by Joannes Vermorel.

"Machine learning is the subfield of artificial intelligence that is concerned with the design and development of algorithms that allow computers to improve their performance over time based on data, such as from sensor data or databases." - Wikipedia.

Ten years ago, machine learning companies were virtually non-existent, or say, marginal at most. The main reason for that situation was simply that there weren’t that many algorithms actually working and delivering business value at the time. Automated translation, for example, is still barely working, and very far from being usable in most businesses.

Lokad fits into the broad machine learning field, with a specific interest in statistical learning. Personally, I have been working in machine learning for almost a decade now, and it's still surprising to see how deeply different things are in this field compared to the typical shrinkwrap software world. Machine learning is a software world of its own.

Scientific progress in areas that looked like artificial intelligence has been slow, very slow compared to most other software areas. But it is too little known that this progress has also been steady; and, today, there are quite a few successful machine learning companies around:

  • Smart spam filter: damn, Akismet has caught more than 71,000 spam comments on my blog, with virtually zero false positives as far as I can tell.
  • Voice recognition: Dragon Dictate is now doing quite an impressive job just after a few minutes of user tuning.
  • Handwriting recognition and even equation recognition are built into Windows 7.

Machine learning has become mainstream.

1. Product changes but user interface stays


For most software businesses, bringing something new to the customer's eyes is THE way to get recurring revenue. SaaS is slowly changing this financial aspect, but still, for most SaaS products, evolution comes with very tangible changes to the user interface.

On the contrary, in machine learning, development usually doesn't mean adding any new feature. Most of the evolution happens deep inside, with very little or no surfacing change. Google Search - probably the most successful of all machine learning products - is notoriously simple, and has been that way for a decade now. Lately, ranking customization based on user preferences has been added, but this change occurred almost 10 years after the launch and, I would guess, is still unnoticed by most users.

Yet, it doesn't mean that the Google folks have been staying idle for the last 10 years. Quite the opposite, actually: Google teams have been furiously improving their technology, winning battle after battle against web spammers who are now using very clever tricks.

2. Ten orders of magnitude in performance


When it comes to software performance, usual shrinkwrap operations happen within 100ms. For example, I suspect that the usual server-side computation times needed to generate a page of a web application range from 5ms for the most optimized apps to 500ms for the slowest ones. Be slower than that, and your users will give up on visiting your website. Although it's hardly verifiable, I would suspect this performance range holds true for 99% of web applications.

But when it comes to machine learning, typical computational costs vary by more than 10 orders of magnitude, from milliseconds to weeks.

At present, the price of 1 month of CPU at 2GHz has dropped to $10, and I expect this price to drop under $1 in the next 5 years. Also, one month of CPU can be compressed into a few hours of wall time through large-scale parallelization. For most machine learning algorithms, accuracy can be improved by dedicating more CPU to the task at hand.

Thus, gaining 1% in accuracy with a 1 month CPU investment ($10) can be massively profitable, but that sort of reasoning is just plain insanity for most, if not all, software areas outside machine learning.
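
A back-of-the-envelope illustration in Python (the inventory figures below are made up for the sake of the example; only the $10 per CPU-month comes from above, and mapping a 1% accuracy gain to 1% less inventory is a crude simplification):

cpu_month_cost = 10.0         # price of 1 month of CPU, as quoted above
inventory_value = 1_000_000   # hypothetical inventory value, for illustration only
carrying_rate = 0.20          # hypothetical 20% annual carrying cost
accuracy_gain = 0.01          # assume 1% less inventory for the same service level

yearly_saving = inventory_value * carrying_rate * accuracy_gain   # $2,000 per year
print(yearly_saving / cpu_month_cost)   # roughly a 200x return on the CPU spend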

3. Hard core scalability challenges


Scaling up a Web 2.0 app such as, say, Twitter is indeed a challenge, but, in the end, 90% of the solution lies in a single technique: in-memory caching of the most frequently viewed items.

On the contrary, scaling up machine learning algorithms is usually a terrifyingly complicated task. It took Google several years to manage to perform large scale sparse matrix diagonalization; and linear algebra is clearly not the most challenging area of mathematics when it comes to machine learning problems.

The core problem of machine learning is that the most efficient way to improve your accuracy consists in adding more input data. For example, if you want to improve the accuracy of your spam filter, you can try to improve your algorithm, but you can also use a larger input database where emails are already flagged as spam or not spam. Actually, as long as you have enough processing power, it’s frequently way easier to improve your accuracy through larger input data than through smarter algorithms.

Yet, handling large amounts of data in machine learning is a complicated problem because you can't naively partition your data. Naive partitioning is equivalent to discarding input data and performing local computations that do not leverage all the data available. Bottom line: machine learning needs very clever ways of distributing its algorithms.
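
A tiny Python illustration of the pitfall: averaging the per-partition averages silently gives every partition the same weight, which is only correct when partition sizes happen to be equal; the distributed computation has to carry sufficient statistics (here, sums and counts) across partitions instead.

# Two partitions of very unequal sizes
partition_a = [1.0] * 99      # 99 observations
partition_b = [101.0]         # 1 observation

# Naive: average the local averages - every partition weighs the same
local_means = [sum(p) / len(p) for p in (partition_a, partition_b)]
print(sum(local_means) / len(local_means))   # 51.0, wrong

# Correct: ship (sum, count) from each partition, then combine
pairs = [(sum(p), len(p)) for p in (partition_a, partition_b)]
total = sum(s for s, _ in pairs)
count = sum(c for _, c in pairs)
print(total / count)                         # 2.0, the true mean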

4. User feedback is usually plain wrong


Smart people advise to do hallway usability testing. This also applies to whatever user interface you put on your machine learning product, but when it comes to improving the core of your technology, user feedback is virtually useless, when not simply harmful if actually implemented.

The main issue is that, in machine learning, most good / correct / expected behaviors are unfortunately counter-intuitive. For example, at Lokad, a frequent customer complaint is that we deliver flat forecasts, which are perceived as incorrect. Yet, those flat forecasts are in the best interest of those customers, because they happen to be more accurate.

Although somewhat knowledgeable about spam filtering, I am pretty sure that 99% of the suggestions I could come up with and send to the Akismet folks would be just junk to them, simply because the challenge in spam filtering is not how do I filter spam, but how do I filter spam without filtering legitimate emails. And yes, the folks at Pfizer have the right to discuss Sildenafil citrate compounds by email without having all their emails filtered.

5. But user data holds the truth


Mock data and scenarios mostly make no sense in machine learning. Real data happens to be surprising in many unexpected ways. I have been working in this field for 10 years now, and every new dataset I have investigated has surprised me in some way. It's completely useless to work on your own made-up data. Without real customer data at hand, you can't do anything in machine learning.

This particular aspect frequently leads to a chicken-and-egg problem in machine learning: if you want to start optimizing contextual ad display, you need loads of advertisers and publishers. Yet, without loads of advertisers and publishers, you can't refine your technology, and consequently you can't convince loads of advertisers and publishers to join.

6. Tuning vs. Mathematics, Evolution vs. Revolution


Smart people advise that rewriting from scratch is the type of strategic mistake that frequently kills software companies. Yet, in machine learning, rewriting from scratch is frequently the only way to save your company.

Somewhere at the end of the nineties, AltaVista, the leading search engine, did not take the time to rewrite its ranking technology around the crazy mathematical idea of large-scale diagonalization. As a result, it got overwhelmed by a small company led by a bunch of inexperienced people.

Tuning and incremental improvement is the heart of classical software engineering, and it also holds true for machine learning - most of the time. Gaining the next percent of accuracy is frequently achieved by finely tuning and refining an existing algorithm, designing tons of ad-hoc reporting mechanisms in the process to get deeper insights into the algorithm's behavior.

Yet, each new percent of accuracy gained that way costs tenfold the effort of the previous one; and after a couple of months or years, your technology is just stuck in a dead end.

That's where hard-core mathematics comes into play. Mathematics is critical to jump to the next stage of performance, the kind of jump where you make a 10% improvement that did not even seem possible with the previous approach. Then again, trying new theories is like playing roulette: most of the time you lose, and the new theory does not bring any additional improvement.

In the end, making progress in machine learning means very frequently trying approaches that are doomed to fail with a high probability. But once in a while something actually happens to work and the technology leaps forward.
