AI in manufacturing – revolutionary opportunity or well-trodden path?

Dec 7, 2019 | IT in Manufacturing

Artificial Intelligence (AI) has become a catchphrase used by marketers to attribute the characteristics of human intelligence to a computer system. AI incorporates concepts like machine learning and pattern recognition. It is impressive that facial recognition can help you identify one of your friends in an old school photograph, or show you what you will look like in 10 years’ time. Fun aside, there is, of course, enormous economic potential in applying AI to industry. Improved quality control, better equipment design, improved efficiencies, streamlined supply chains, predictive maintenance, safer plants, and intelligent, safe collaborative robots are all benefits of AI correctly applied. But the popular assumptions around AI should also be challenged in order to understand some of the limitations of the concept.

Is AI really that new?

In the chemicals industry, I have seen software used for decades to design plants and optimise production processes. Chemical engineers who designed their first heat exchanger using a slide rule will appreciate the time saved by using finite element analysis on a computer. I also remember first putting together heat and mass balances in 1986 using a software program called SPEEDUP, which was used for steady-state and dynamic modelling of process streams. The ability of software to calculate multiple scenarios in a fraction of the time meant that process and equipment design became quicker and far more efficient. These systems certainly improved (augmented) an engineer’s ability to design complex process plants, but they were really CAD (computer-aided design) and not real AI.

If real AI requires an element of self-learning, do any well-proven examples already exist in manufacturing? The answer is yes. In a running oil refinery, neural networks have been used for decades to take a set of input variables (such as trends of temperature, pressure and composition) and predict the output of a complex system like a distillation column. These networks are initially ‘trained’ with data and then set up to ‘learn’ so that the predictions become better over time. Once the neural network can reliably predict performance, it can be used to simulate the future. These techniques are proven, but also limited in that they operate in closed, well-defined systems.
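
To make this concrete, here is a minimal sketch of the pattern using scikit-learn. It is not the refinery system described above: all variable names, ranges and data are illustrative assumptions. A small neural network is first trained on historical data, then updated incrementally as fresh plant data arrives, and finally used to predict an unseen scenario.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical historian data: temperature (°C), pressure (kPa) and
# feed composition (mass fraction) -> product purity (%).
X_train = rng.uniform([150, 100, 0.4], [200, 300, 0.9], size=(500, 3))
y_train = (0.3 * X_train[:, 0] - 0.05 * X_train[:, 1]
           + 40 * X_train[:, 2] + rng.normal(0, 1, 500))  # toy relationship

# Initial 'training' phase on historical data.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Ongoing 'learning': refine the model as new plant data becomes available.
X_new = rng.uniform([150, 100, 0.4], [200, 300, 0.9], size=(50, 3))
y_new = (0.3 * X_new[:, 0] - 0.05 * X_new[:, 1]
         + 40 * X_new[:, 2] + rng.normal(0, 1, 50))
model.partial_fit(X_new, y_new)

# Once predictions are reliable, the model can be used to simulate scenarios.
scenario = np.array([[180.0, 220.0, 0.75]])
print(f"Predicted purity: {model.predict(scenario)[0]:.1f}%")
```

In a real plant, such a model would of course be built from historian data and validated against known performance before being trusted for what-if simulation.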

AI is an evolving technology that utilises advanced techniques to self-learn. Real AI is not just CAD or simulation; it seeks to augment human problem solving and judgement where the inputs are uncertain and it is not possible to reliably determine the best course of action. In most industrial applications, AI is most likely to be used to enhance human decision making, not simply to replace or codify it.

Distinguishing AI from automation

To better understand the manufacturing plant of the future, it is also important to distinguish AI from automation. Automation repeatedly produces the desired result without any human intervention: provided that the inputs fall within a defined range, the output parameters can be reliably determined and executed.
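
As a toy illustration of this definition (the control rule and numbers below are invented, not any specific plant's logic): for any input within the defined range, the output is fully determined and repeatable, and nothing is learned along the way.

```python
def feed_valve_position(level_pct: float) -> float:
    """Map a tank level reading (0-100%) to a feed valve opening (0-1)."""
    if not 0.0 <= level_pct <= 100.0:
        # Outside the defined input range, the automation has no answer.
        raise ValueError("level reading outside the defined range")
    if level_pct < 20.0:
        return 1.0   # tank running low: open the feed valve fully
    if level_pct > 80.0:
        return 0.0   # tank nearly full: shut the feed valve
    return 0.5       # otherwise hold a steady feed

print(feed_valve_position(15.0))  # always 1.0 for the same input
```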

An automation application can be engineered to operate in a defined system for years. On the other hand, an AI application is continuously evolving and relies on ongoing human interaction in order to learn, accommodate change, make better recommendations and produce better outcomes. In the beginning, it can be quite basic (remember Clippy, the Microsoft paperclip assistant for Office?). Later it evolves (think Cortana or Apple’s Siri). As the technology evolves, our trust in it and our reliance on it also increase.

This distinction from automation affects the way we should approach an AI project in business. You can apply proven techniques to design and embed automation into a plant so that it runs without any further effort. However, when embarking on an AI project, you need to be prepared for an ongoing process that will iterate and evolve over time. You also need to consider and provide for the human/machine interaction, both now and in future scenarios where the next-generation workforce is on board.

Unpredictable human behaviour is the biggest challenge

To illustrate the unpredictability of human behaviour: back in 1988 I worked on a project to optimise production across a fairly complex factory. The site comprised over a dozen continuous processing plants, all interlinked and dependent on each other for raw materials. There were several constraints (like steam supply and rail networks) that prevented certain combinations of plants running at full capacity. If any plant shut down and buffers ran empty, the ripple effects were very costly. We used computer simulation of historical patterns of raw material supply and production to determine the optimum inventory levels and production rates for the various plants. The optimum ensured that no plant was ever starved of raw materials or energy. As new production data became available, the model was updated and improved. I remember presenting the results of the initial study to a room full of production managers, business representatives and engineers. Everyone seemed to agree that the technique made sense and committed to using it for production planning.
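
For illustration, the sketch below captures the spirit of such a study (the supply statistics, buffer sizes and rates are invented for the example, not taken from the original 1988 model): simulate a year of variable upstream supply, hour by hour, and measure how often each candidate production rate leaves the downstream plant starved of feed.

```python
import numpy as np

def starvation_risk(rate, hours=24 * 365, buffer0=500.0, capacity=1000.0):
    """Fraction of simulated hours the plant is starved of feed."""
    rng = np.random.default_rng(1)            # same supply trace for every rate
    supply = rng.normal(100.0, 15.0, hours)   # variable upstream supply, t/h
    buffer = buffer0
    starved = 0
    for s in supply:
        buffer = min(capacity, buffer + s) - rate   # top up, then draw down
        if buffer < 0:
            starved += 1
            buffer = 0.0                            # plant idles until feed returns
    return starved / hours

# Sweep candidate production rates and report the starvation risk of each.
for rate in range(90, 105):
    print(f"rate {rate} t/h -> starved {starvation_risk(float(rate)):.1%} of hours")
```

Seeding the random generator inside the function means every candidate rate is tested against the same simulated supply trace, which keeps the comparison fair. The real study was, of course, far richer, covering a dozen interlinked plants and shared constraints such as steam supply.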

However, a few months later very little had come of the initiative. Not because the simulation was inaccurate, but because the software could never truly account for individual behaviours (the human element). Production managers seemed conditioned to always run their plants a little harder to get ahead of their targets and collect their production bonuses. Why would a production manager deliberately throttle back his plant just because of a computer simulation run by a junior engineer? It took just one strong-willed maverick to fall out of line, and the whole system became unstable again. This led to the credibility of the simulation model being questioned, and before long everyone had reverted to their old chaotic habits. The problem had moved from the realm of engineering to become an HR issue.

I had a similar experience decades later when we modelled the supply chain for an FMCG (fast-moving consumer goods) manufacturer. Most production managers ignored the results of the simulation in favour of “gut feel and experience” because they did not understand the complex logic and therefore could never get comfortable with the simulated results.

Conclusion

The point of these examples is to highlight how important it is to consider the human response when building an AI system. AI is not simply a better way to automate routine tasks; it should also augment human decision making. A computer is very good at rapidly performing repetitive calculations (such as aiding equipment design or running production simulations). People are much better at judgement calls that involve intuition and uncertainty, and at building relationships. In the short term, manufacturing AI will find traction in restricted areas like quality control and inspection, CAD, condition monitoring and augmented reality. But there is a very long road ahead before we have autonomous production plants capable of reconfiguring themselves to meet new production requirements.

Is AI in manufacturing truly the revolution punted by futurists and marketing people? Yes and no. Arguably, AI has already come a long way, and from an engineering and industry perspective we are in for a continued evolution of what has been done before. However, the human and social dimensions around AI are still poorly understood and will, I believe, become the real challenge. This factor will impact manufacturing in many unpredictable and disruptive ways. The next few years are certainly going to be an exciting and at times uncomfortable ride!

This article was first published on SA Instrumentation and Control.
