The Power of Operations Research in Manufacturing - Q&A with Axel Parmentier, Part 1

The power of operations research for manufacturing operations


Operations research tools are like a Swiss Army knife: they are extremely good at solving a very wide range of problems. Many well-known operations research techniques were first developed for airlines, such as the predictive maintenance algorithms that help carriers like Air France schedule aircraft maintenance at precisely the right moment.


Today, the applications of operations research extend far beyond the airline industry. Take, for example, the widely used revenue management and yield management algorithms in the financial industry. Similarly, in manufacturing plants, the scheduling algorithms designed by operations researchers help operators adapt to changing circumstances and obtain new, customized schedules in a matter of seconds.

To understand how the manufacturing industry has been aided by operations research, and to look ahead at the trends shaping its future, we spoke to Axel Parmentier, researcher at CERMICS, Ecole Nationale des Ponts et Chaussées. At Pelico we are very lucky to have Axel as our scientific advisor, and he has been helping us shape the Pelico solution. One of our Product Strategists, Ismail Zizi, sat down with Axel to discuss why now is a great time for supply chain and manufacturing organizations to start unlocking the value of operations research.


What is operations research?

Operations research is the mathematical discipline that provides decision support tools. In an industrial context, the number of possible options is typically too large for a decision maker to consider each option individually. 


Operations research provides methods for finding a good option, or even the best one. Route selection, network design and scheduling are among its natural applications. This essential tool for addressing resource allocation problems is found in logistics, supply chains, network industries, infrastructure management, finance, computer architecture and, of course, manufacturing. More generally, the development of information systems and the massive amounts of resulting data have increased the potential applications of operations research, and there are many industrial sectors where the discipline is still under-exploited.
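To make the scale issue concrete, here is a small illustrative sketch (not taken from the interview): assigning four jobs to four machines at minimum total cost. With four jobs there are only 4! = 24 possible assignments, but the count grows factorially with size; an operations research algorithm such as the Hungarian method, available in SciPy, finds the optimum directly without enumerating them. All numbers below are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up processing costs: cost[i][j] = cost of running job i on machine j.
cost = np.array([
    [9, 2, 7, 8],
    [6, 4, 3, 7],
    [5, 8, 1, 8],
    [7, 6, 9, 4],
])

# The Hungarian method finds the cost-minimizing one-to-one assignment
# without enumerating all 4! = 24 possible assignments.
jobs, machines = linear_sum_assignment(cost)
print(list(zip(jobs, machines)))                 # optimal job -> machine pairs
print("total cost:", cost[jobs, machines].sum())
```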


Operations research first came to prominence during WWII, when it was used to coordinate the deployment of Allied forces in Normandy, among other wartime activities; it has since become an integral part of supply chain logistics and manufacturing. Its use cases have evolved from designing routing networks in the 1950s and 60s to planning communications, energy and manufacturing networks in the decades that followed.


In the early years, routing and scheduling (for manufacturing) were identified as challenging applications. While the last decades have seen major successes in routing, with huge-scale supply chain problems routinely solved to optimality or near-optimality, manufacturing applications remain more challenging.

 

From an application point of view, the current game-changer is data. Manufacturing data is often scattered across dozens of databases, making it incredibly challenging to retrieve and make sense of. But new algorithms enable manufacturers to retrieve and centralize all of this data.



Q&A — How to Optimize Manufacturing Processes with Operations Research



1. How do we make sure the manufacturing industry is mature enough to draw insights from Operations Research?

Operations Research algorithms solve computer science problems. A computer science problem can be understood as a “well-posed” question on some data. An algorithm is then a recipe that the computer is going to follow to answer that question. Algorithms are definitely not magic wands: if the question posed is not the right one, or if the data does not contain the answer, the decision support tool will be useless.


This gives us the prerequisites for successfully applying Operations Research in an industry like manufacturing. First, the process must be sufficiently well structured to identify tasks that could be (partially) automated with operations research algorithms. Second, someone needs to spot the opportunity. For that, there must be a dialogue between operations teams, who know which questions are worth answering, and operations research engineers, who know which problems can be solved. And third, the data must be available and well structured.


1.1 How do you ensure a successful collaboration between engineers and operational teams?


First, let’s discuss the recipe for failure. Imagine good mathematicians who start an operations research project for an industrial partner. They spend five months fine-tuning the best algorithm they can think of, until the day finally arrives to test it out. They show the solution to the practitioners, who tell them it is useless because it does not satisfy a constraint that is absolutely necessary, yet so trivial that operations did not even mention it to the mathematicians at the beginning. Since the constraint changes the mathematical problem entirely, those five months of work can be thrown out.


Another failure mode is that the data used is not precise enough, so that the finely tuned algorithm behaves badly in production because it optimizes a biased criterion.


Now, what’s the recipe for success? In the beginning, you shouldn’t really care about the algorithm. What is really needed is to ensure that you have a good model, that is, that you answer the right question and have relevant data at your disposal. You therefore need the mathematician and the operations person to work together from the very outset. One difficulty is that the operations person does not understand the mathematician’s equations, so you need a good user interface. The trick is to present solutions as quickly as possible, so that industry professionals can look at them and say, ‘Well, that's not exactly what we want’.


In summary, the recipe for success is ensuring your solution corresponds to the daily needs of the industry.


1.2 And what about data availability?


Data availability is critical. Indeed, it can be difficult to evaluate the potential of an operations research opportunity without running some prototype algorithms on the data. If the data is available, this can be done in a few weeks. But if the data is of poor quality and spread among multiple databases, months of data work are needed before you can even get a first idea. Most managers will not invest that much before they can evaluate the potential of the opportunity.



2. Can you share any success stories where Operations Research helped companies optimize their manufacturing processes?


I built an algorithm to optimize the reverse logistics of a European carmaker. We spent a lot of time understanding the core needs of the carmaker's supply chain, asking questions such as: which decisions do we need to take every day? It turned out that, despite a powerful algorithm, their previous solution failed to deliver performance because their model was short-sighted. It considered only the decisions to be taken the next day, when it should have taken a longer perspective. The answer was therefore not in the data the algorithm was looking at.

Our first move was to work on a three-week picture instead of a day-by-day approach. We also spent a lot of time challenging the constraints imposed by the operational teams, to identify which ones were really needed and which ones were just there to help operational staff find a solution when they searched for one with pen and paper, and could be safely removed once computational power was brought in. Once we had a good model that focused on the right scope and had removed any useless constraints, we built an algorithm that was advanced but relatively simple, and the savings in inventory and routing costs were somewhere between 5% and 15%, which is very significant.


We computer scientists spend most of our time fine-tuning complex algorithms for well-posed problems—that is, refining strategies to play better by the rules of the game. Years of interaction with the industry taught me that there was generally more money to be made by changing the rules of the game.



3. What about an example where your approach failed?


Right after my thesis, I started a project with an airline I have been working with ever since. Applying the recipe mentioned above, I made sure that I used the right model and the right data. Since the project was closely related to the topic of my PhD, I designed a state-of-the-art algorithm to minimize costs based on the analysis of flight-leg combinations. The failure was spectacular: one year later, there were zero results in production. The team that implemented and maintained the tool did not master the technology behind my algorithm. Therefore, even though I initially delivered a working prototype, the team wasn’t able to adapt, debug, and maintain it.


I decided to simplify the algorithm, and we immediately had positive results. This simpler version is still in production today and is performing perfectly. The lesson I took from that project is that the right algorithm is not the state-of-the-art academic one, but the one best suited to the technological know-how of the team that will implement and maintain it.



4. How do you explain the operational success of Amazon and digital-native companies such as Gorillas, versus traditional companies? Is there a difference in approach?


As we already mentioned, you need mature processes and well-structured data to successfully apply operations research. The ideal partners for an operations research specialist like me are companies whose information systems have scaled along with their operations. Because the information system is well structured and the processes are organized within it, you get easy access to the real-time data needed to make decisions, and to the IT infrastructure needed to act on them.


As for the difference between established companies and digital natives, the key is the integration of the IT infrastructure with their operational processes. It makes the data required by the algorithms easily accessible, and the technology to automatically apply the decisions proposed by the algorithms is already in place. All this reduces the side effort required to apply operations research algorithms, which enables much faster deployment. With their processes, these new companies have built the metaphorical IT freeway on top of which cars (the algorithms) can go very fast.


5. How does the hybrid approach work, where established companies want to benefit from new technologies?


For established companies whose information systems have been built on top of existing processes, integrating algorithms is much less natural. Data is likely to be scattered across several databases that were designed in silos, and these databases are generally not organized in a way that makes them easy to use for optimizing the process. This is where firms like Pelico unlock the value hidden in these databases. With their ontologies, they link the databases together and make it possible to leverage them to improve the process. In other words, they build the freeway needed by the algorithms.


Once you have made this link between the databases, you have already taken a huge step. Indeed, any decent car goes faster on a freeway than a supercar on a country road. Similarly, simple algorithms can unlock a lot of value once ontologies make information easily accessible. And for Operations Research specialists, it is the dream situation: everything is in place to plug in algorithms and deliver maximum performance.


Of course, even if everything is ready for the algorithm, you still need someone who is able to identify the opportunities, that is, someone who understands both the industrial processes and the algorithms sufficiently well. If the company does not have such talent in-house, it needs to hire experts to run some sort of data diagnostic: you look at the data and try to figure out whether the key to the solution is hidden in there somewhere. The important thing is that the experts must be truly committed to understanding the company’s processes, and not just apply standard algorithms that aren’t tailored to them.


6. Do the end users need to understand the algorithms?


One of the benefits of Operations Research is that you don’t need to understand the mathematics to see if the suggested solution works. When you look at machine schedules, you do not need to understand how they have been built—what you care about is that they function well. As a consequence, the most important part is probably the user interface. Even if the operator doesn't understand how the algorithm works, if they have a nice interface showing the output of the algorithm and the schedule for the different tasks for each machine, they can interact with the algorithm and understand the outcome. They do not need intricate knowledge of how the algorithm works to do their job. 


The situation is different when you need to make impactful decisions based on a statistical model. Recall that machine learning is mainly focused on prediction problems; a typical application is predictive maintenance. The output of such an algorithm will be “yes, you need to repair the landing gear of this aircraft” or “no, you do not”. Given the amount of work it takes to remove the landing gear of a long-haul aircraft, decision makers typically do not want to trust a black box for this kind of decision. You therefore have to use open-box algorithms, whose decisions can be understood by operational teams. I have had the chance to work on the predictive maintenance tools for Air France, which have been and continue to be a huge success. The open-box aspect of the tool was critical for its adoption by operational teams.
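The Air France tool itself is not public, so as a generic illustration of the “open-box” idea, here is a minimal sketch (entirely synthetic data, made-up feature names, scikit-learn assumed available) of an interpretable model whose decision rules an operations team could read and challenge:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Entirely synthetic sensor data: [flight hours since overhaul, vibration level].
X = rng.uniform(low=[0.0, 0.0], high=[5000.0, 1.0], size=(500, 2))
# Made-up failure rule, used only to generate labels for this illustration.
y = ((X[:, 0] > 3500) & (X[:, 1] > 0.6)).astype(int)

# A shallow tree keeps the model "open-box": its rules fit on one screen.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Operational teams can read the decision rules directly, no maths required.
print(export_text(tree, feature_names=["hours_since_overhaul", "vibration"]))
```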



7. How can you make sure humans are in the loop, feeding the whole system, adding value to the data, and refining the system when needed?


As I mentioned previously, having a good user interface and making sure that the tool answers the operations team’s needs is the best way of ensuring that it will be used and fed with the right data. Then there is the question of how you update the decisions based on this data.


This becomes trickier when the environment is uncertain, and uncertainty is a defining feature of manufacturing. Operational teams spend their time firefighting because of it. If you want to improve the situation with mathematics, the questions you want to answer are of the form: given all the information I have, what is the probability that this event happens, and which decisions should I take as a consequence? We enter the field of Bayesian statistics (for the predictions) and of stochastic optimization (for the decisions). Both approaches are based on probabilistic models. And if there is one thing to know about probabilities, it is that humans typically have bad intuitions about them.


You now see the difficulty: you want to keep the human in the loop while using probabilistic models, which are among the least intuitive parts of mathematics. The statistician’s answer to this situation is: use Bayesian networks. Very broadly, a Bayesian network is a graph, that is, nodes and arrows between the nodes, that describes causality. We use this kind of model because an operational expert is expected to understand the variables corresponding to the different nodes and the arrows that give their relations. You do not need to know statistics to understand that if a task is scheduled after another on a machine, the delay on the second one is going to be impacted by the first, and not the other way around. Operations experts therefore work hand in hand with mathematicians to build a Bayesian network that encodes the operations expert’s understanding of the situation. Once that is done, the statistician just has to feed the model with data and infer probabilities about the future.
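As a toy illustration of the idea (all probabilities invented, and far simpler than any real plant model), here is a two-node network encoding the causal arrow “task A delayed → task B delayed”, with inference done by direct enumeration in plain Python:

```python
# Minimal two-node Bayesian network: "task A delayed" -> "task B delayed".
# All probabilities below are invented for illustration only.
p_a_delayed = 0.2                      # P(A delayed)
p_b_given_a = {True: 0.7, False: 0.1}  # P(B delayed | A delayed / A on time)

# Joint distribution over the two variables, following the causal arrow A -> B.
def joint(a_delayed, b_delayed):
    pa = p_a_delayed if a_delayed else 1 - p_a_delayed
    pb = p_b_given_a[a_delayed] if b_delayed else 1 - p_b_given_a[a_delayed]
    return pa * pb

# Prediction: how likely is task B to be late overall?
p_b = sum(joint(a, True) for a in (True, False))
print(f"P(B delayed) = {p_b:.2f}")  # 0.2*0.7 + 0.8*0.1 = 0.22

# Diagnosis (Bayes' rule): if B turns out late, how likely is A the cause?
p_a_given_b = joint(True, True) / p_b
print(f"P(A delayed | B delayed) = {p_a_given_b:.2f}")  # 0.14 / 0.22 = 0.64
```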


8. Do you have examples of people using dynamic statistical inference in their operations, at scale?


Well, as I said, the previous description of Bayesian networks is the theory of how things should work according to the mathematicians. And yes, for relatively simple processes, these approaches are applied with success, notably in the context of predictive maintenance. But they are definitely not used routinely at the scale of a full production plant. There, people generally stick to an older and simpler technology: discrete-event systems. These systems enable you to simulate how the plant will behave given the input data, and they are used with success in many industries. They have the advantage of being simpler and more flexible than statistical inference, and they produce results faster. Recently, probabilistic programming, which is supposed to combine the best of both worlds, has started to emerge. I am not an expert in that field, but I tend to think it is still not mature enough to be applied in manufacturing.
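As a minimal sketch of the discrete-event idea (processing times, arrival pattern and machine names are all invented), jobs flow through two machines in series while an event queue advances simulated time from one event to the next:

```python
import heapq

# Minimal discrete-event simulation of jobs flowing through two machines in
# series. Processing times are invented; a real plant would use measured data.
proc_time = {"machine_1": 4.0, "machine_2": 6.0}
free_at = {"machine_1": 0.0, "machine_2": 0.0}

# Event queue: (time, job, machine). Five jobs arrive every 3 time units.
events = [(3.0 * j, f"job_{j}", "machine_1") for j in range(5)]
heapq.heapify(events)

while events:
    time, job, machine = heapq.heappop(events)
    start = max(time, free_at[machine])          # wait if the machine is busy
    end = start + proc_time[machine]
    free_at[machine] = end
    print(f"{job} on {machine}: start {start:.1f}, end {end:.1f}")
    if machine == "machine_1":                   # route the job onward
        heapq.heappush(events, (end, job, "machine_2"))

print("makespan:", max(free_at.values()))
```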


9. What is the next big thing in operations research? Where is the frontier?


I believe that Operations Research tools will become much more flexible thanks to machine learning. One difficulty in Operations Research is that you have to manually tailor your algorithm to the very specific problem you are working on. This really slows down the adoption of our technologies. First, it means heavy engineering work on practical applications. Since Operations Research is not a new field, you would expect off-the-shelf solutions that can be applied to most industrial problems (and some exist, the best example being mixed-integer linear programming solvers). The reality is that operations research academics have spent 70 years building Formula One algorithms for academic problems, while real-life problems often look like those academic problems but with slight differences that make the off-the-shelf algorithms impossible to apply. You therefore have to develop an algorithm for your variant from scratch. And second, it means huge maintenance costs, because when a new constraint appears, you may have to rebuild the algorithm completely.
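To show what such an off-the-shelf solver looks like in practice, here is a deliberately tiny production-mix example (all numbers invented), written with the open-source PuLP library and its bundled CBC solver; real plant models have thousands of variables and constraints:

```python
from pulp import LpMaximize, LpProblem, LpVariable, PULP_CBC_CMD, value

# Invented data: two products, profit per unit, and two shared resources.
prob = LpProblem("production_mix", LpMaximize)
x = LpVariable("units_of_A", lowBound=0, cat="Integer")
y = LpVariable("units_of_B", lowBound=0, cat="Integer")

prob += 30 * x + 40 * y            # objective: total profit
prob += 2 * x + 4 * y <= 100       # machine-hours available
prob += 3 * x + 2 * y <= 90        # labour-hours available

prob.solve(PULP_CBC_CMD(msg=False))  # the off-the-shelf MILP solver does the work
print("plan:", x.value(), y.value(), "profit:", value(prob.objective))
```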


Emerging techniques at the frontier of machine learning and operations research are starting to solve this “variant of a known problem” issue. Typically, a statistical (machine learning) model is used to turn the specific industrial variant into an instance of the initial problem for which good algorithms exist, and a machine learning algorithm is used to calibrate that model, based on data, so that it solves the variant well.


This may sound very abstract, so let us put it simply: today, when there is a small change in your process, you need to hire an engineering team to model that change, develop it, and maintain it. Tomorrow, with these new techniques, all you will have to do is launch an algorithm that learns by itself, at the cost of computing time only, how to adapt the parameters of the software to the new situation. The cost of deploying operations research algorithms will fall, leading to a boom of operations research applications in industry.
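This is not Axel's actual method, but a deliberately simplified sketch of the pattern under toy assumptions: the hard “variant” is single-machine scheduling to minimize total tardiness, the well-solved surrogate is sorting jobs by a single priority index, and the “learning” step is reduced to a grid search over one parameter on randomly generated training instances (real pipelines use much richer statistical models):

```python
import random

random.seed(0)

def tardiness(schedule):
    """True objective of the hard variant: total tardiness of the schedule."""
    t, total = 0.0, 0.0
    for proc, due in schedule:
        t += proc
        total += max(0.0, t - due)
    return total

def solve_easy(jobs, theta):
    """Easy surrogate problem: sort jobs by a single learned priority index."""
    return sorted(jobs, key=lambda j: j[0] - theta * j[1])  # j = (proc, due)

# Invented training instances: lists of (processing time, due date) pairs.
instances = [[(random.uniform(1, 10), random.uniform(5, 40)) for _ in range(8)]
             for _ in range(30)]

# "Learning" step, reduced here to a grid search over one parameter theta:
# pick the mapping that makes the easy problem's solution score best on the
# true (variant) objective, averaged over the training data.
best_theta = min(
    (t / 10 for t in range(-20, 21)),
    key=lambda th: sum(tardiness(solve_easy(inst, th)) for inst in instances),
)
print("calibrated theta:", best_theta)

# At run time, a new instance is solved by the fast, well-understood surrogate.
new_instance = [(random.uniform(1, 10), random.uniform(5, 40)) for _ in range(8)]
print("tardiness of learned schedule:",
      round(tardiness(solve_easy(new_instance, best_theta)), 1))
```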


Text:
Ismail Zizi, Product Strategist
Illustration:
Gülşah Keleş