Asset Management: Evolution to Revolution
Klaus M. Blache | April 25, 2019
Q: Asset management is changing rapidly. Where is it all headed?
A: Maintenance and reliability concepts have been evolving for a long time with increasing complexity and capabilities. This is further complicated by global competition, the high number of trades/technicians leaving the workforce, changing business dynamics, and the ongoing need to improve culture.
The “fix it when it’s broken” mentality was the norm prior to 1950. Unfortunately, there are still too many facilities operating with that philosophy. In the mid-60s, we saw some early computer usage, mainly for task reminders and basic work orders. In the mid-70s, Fortran punch cards were still being used on large IBM computers. As computers got smaller and capabilities increased, computerized maintenance management systems (CMMS) got better and usage accelerated.
In the mid-to-late-90s, PCs quickly took over. Then came wireless/mobility, cloud-based computing, and the rise of predictive technologies and condition-based maintenance. Data from these new tools and techniques provided analytics that led to predictive maintenance. In parallel, beginning in the late-70s, there was increasing focus on reliability-centered maintenance, failure modes and effects analysis, and root-cause analysis. Later came design for reliability and maintainability (R&M) and PM (preventive maintenance) optimization.
For several decades, there were good apprentice programs for trades. Very few companies still have them, and not enough young people are moving in that direction. Also, not enough plant-floor people have the discipline, precision-maintenance, and reliability skills needed to optimize operational performance. This has emerged as a strong need in the past 10 years.
It took more than 60 years to get from the Maytag Man (the bored repairman whose idleness conveyed the dependability of the Maytag brand) and the maintenance attitude of that period to current practices. Then came the IoT (Internet of Things), big data, wireless, mobility, and better access to real-time data.
John Moubray had a much earlier version of a chart similar to the one shown below (Reliability-Centered Maintenance, 2nd edition, John Moubray, Industrial Press, NY, 1992). When John and I first met in Aberdeen, Scotland, in the 80s, many of the items on the chart (beginning about 2010) weren’t even on our radar screen. Reliability and maintainability were getting more attention, but much more focus was on lean manufacturing, just-in-time processes, and Six Sigma.
The “Evolution of R&M Tools & Technologies” chart above is not meant to be all-inclusive, but rather a representation of the many different tools and techniques involved and how quickly they’re being introduced. It took more than 60 years to go from reactive to time-based to predictive maintenance, and too many organizations are still trying to figure it out. For everyone, there is still opportunity.
Look at the pace of change and some of the new items since just 2010. There is renewed interest in precision maintenance, cloud-based data storage and analysis, IoT, better real-time data, edge computing, and artificial intelligence. Edge computing is computing that’s done at or near the source of the data. The lower cost of sensors and computing is putting computational capability where it’s needed most. It uses analytics to make maintenance predictions and triggers the needed action items, e.g., dispatching a mobile asset for faster repair.
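To make the edge-computing idea concrete, here is a minimal Python sketch of the kind of logic that could run on a device near the machine: it averages recent vibration readings locally and raises a work request when a limit is exceeded, without waiting on a cloud round trip. The sensor read, the alarm limit, and the dispatch_mobile_asset call are hypothetical stand-ins for plant-specific hardware and CMMS interfaces, not anything specified in this article.

```python
# Minimal edge-analytics sketch: evaluate vibration readings locally and
# trigger a maintenance action when a threshold is exceeded.
# read_vibration_mm_s, dispatch_mobile_asset, and ALARM_LIMIT_MM_S are
# hypothetical stand-ins for real sensor and CMMS interfaces.
import statistics
import random
import time

ALARM_LIMIT_MM_S = 7.1   # example vibration alarm level (assumption)
WINDOW = 10              # number of readings kept on the edge device

def read_vibration_mm_s():
    """Stand-in for a real sensor read; returns simulated velocity in mm/s."""
    return random.uniform(2.0, 9.0)

def dispatch_mobile_asset(asset_id, value):
    """Stand-in for a work-order or dispatch call made from the edge device."""
    print(f"Work request: {asset_id} vibration {value:.1f} mm/s exceeds limit")

def monitor(asset_id, cycles=30):
    readings = []
    for _ in range(cycles):
        readings.append(read_vibration_mm_s())
        readings = readings[-WINDOW:]          # keep only the recent window
        avg = statistics.fmean(readings)
        if avg > ALARM_LIMIT_MM_S:             # decision made locally, not in the cloud
            dispatch_mobile_asset(asset_id, avg)
        time.sleep(0.01)                        # placeholder for the real sampling interval

if __name__ == "__main__":
    monitor("pump-101")
```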
Artificial intelligence (AI) has come a long way but is still in its infancy. AI perceives information, stores it (knowledge), and applies it in new situations. More will be happening with prescriptive maintenance (driven by AI and algorithms). One ultimate goal is to emulate biological neural networks. I also placed purchasing for R&M (design-in and life-cycle cost basis) on the future list, hoping that machine-learning/AI systems will build it into their decision making.
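To illustrate the step from predictive to prescriptive maintenance, here is a small, hypothetical Python sketch: it takes a failure probability produced by some upstream predictive model and maps it to a recommended action. The thresholds and action wording are illustrative assumptions only, not rules from the article.

```python
# Minimal prescriptive-maintenance sketch: turn a predicted failure probability
# into a recommended action. Thresholds and actions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AssetState:
    asset_id: str
    failure_probability: float   # output of an upstream predictive model (0..1)
    days_to_planned_stop: int    # next scheduled downtime window

def recommend_action(state: AssetState) -> str:
    """Simple policy layered on top of a prediction: prescribe, don't just predict."""
    if state.failure_probability > 0.8:
        return f"{state.asset_id}: stop and repair now"
    if state.failure_probability > 0.5 and state.days_to_planned_stop > 7:
        return f"{state.asset_id}: pull the planned stop forward; order parts today"
    if state.failure_probability > 0.5:
        return f"{state.asset_id}: repair at planned stop in {state.days_to_planned_stop} days"
    return f"{state.asset_id}: continue monitoring"

if __name__ == "__main__":
    print(recommend_action(AssetState("compressor-7", 0.62, 12)))
```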
Quantum computing has the potential to increase computing capability on a scale of billions. It applies the laws of quantum physics, enabling it to perform computations on many states simultaneously. The rules of classical computing no longer apply, allowing it to store and work with vast amounts of data.
Along the pathway of changes in R&M from the 1940s to today, each paradigm shift brought a period of uncertainty: Why should we change? The benefits and payback of the new tools and technologies were typically oversold, and early users were often discouraged. As the applicability improved, so did the acceptance and return on investment.
A recent study, “What Separates Leaders from Laggards in the Internet of Things,” January 2019, McKinsey & Co., New York (mckinsey.com), found that:
• less than 30% moved their IoT programs beyond the pilot phase
• only a sixth of companies experienced a positive impact on cost and revenues
• improving financial impact required numerous implementations and a steep learning curve (the first 15 use cases had only a modest impact).
So, it’s not surprising that many facilities that struggle just to get weekly production out don’t engage in this yet. As I recommend to every facility I visit, “get your data ready” (so you can trust it for decision making). The systems will come hungry for data; be ready to feed them good data. Also, going forward, given the speed of what’s coming in computing power and R&M tools, it’s going to be more revolution than evolution. EP
Based in Knoxville, Klaus M. Blache is director of the Reliability & Maintainability Center at the Univ. of Tennessee, and a research professor in the College of Engineering. Contact him at kblache@utk.edu.