Improve Data Integrity, Cut Waste
EP Editorial Staff | February 1, 2022
Inaccurate data in your computerized maintenance-management system (CMMS) and enterprise asset-management (EAM) systems is preventing your company from realizing the full benefits of digital transformation.
By Lee McClish, NTT GDC Americas
People in most operations have, at minimum, serious questions about the integrity of their operational data. Many simply don't trust much of the data they generate, and a large number couldn't tell good data from bad or say how to identify and use it. Why does it matter? Poor data gets in the way of any digital-transformation effort.
Data integrity is the maintenance and assurance of data accuracy and consistency over its entire life cycle. It’s a critical aspect of the design, implementation, and usage of any system that stores, processes, or retrieves data.
Fundamentally, data integrity is maintained by designing a framework in which data cannot be manipulated. In a broader context, it is a concept that encompasses the validity and preservation of all data, and the term is closely associated with database management.
Data-integrity importance
Maintaining data integrity is important for several reasons. Primarily, it ensures recoverability and searchability, traceability (to origin), and connectivity. Protecting the validity and accuracy of data also increases stability and performance while improving reusability and maintainability.
Why is it important to maintain data integrity? Imagine making an extremely important business decision that hinges on data that is entirely, or even partially, inaccurate. The resulting flawed decision can have a dramatic impact on the company's bottom-line goals. Good data ensures reporting consistency across multiple plants and supports benchmarking with other industries.
What data are we talking about? Although data integrity is a broad concept that includes cybersecurity, physical safety, and database management, we’ll focus on CMMS and EAM systems. The concepts can certainly be extended to purchasing and other business systems that provide decision-making information.
What causes poor data integrity? The primary causes are inadequate review of new data imports and inattention to detail during manual entry. Similar-looking characters, such as zero and the letter "O," or the number 1 and lower-case "l," cause confusion. Another factor is truncated data, which results when more characters are provided than fit in a field.
What damage does poor integrity cause? Every report or dashboard could exhibit errors due to poor data. The result could be poor decision making that affects head count, capital planning, and maintenance scheduling.
What does good data look like? At minimum, good data is free of misspellings, has no blanks, uses N/A only if the data entry really doesn't apply, and eliminates extraneous punctuation or extra characters. A key factor is ensuring that information, such as manufacturer names and model/serial numbers, is entered accurately. Have you eliminated multiple spellings of manufacturer names?
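As a concrete illustration, here is a minimal Python sketch that flags variant spellings of manufacturer names in an exported asset list. The file name and the MANUFACTURER column are assumptions for the example; substitute whatever your CMMS export actually produces.

```python
# Minimal sketch: flag manufacturer-name variants in an exported asset list.
# The CSV file name and "MANUFACTURER" column are hypothetical and will
# differ by CMMS/EAM.
from collections import defaultdict
import csv
import re

def normalize(name: str) -> str:
    """Reduce a name to a comparison key: lower-case, punctuation and spaces removed."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

variants = defaultdict(set)
with open("asset_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        raw = (row.get("MANUFACTURER") or "").strip()
        if raw:
            variants[normalize(raw)].add(raw)

# Any key with more than one raw spelling points to inconsistent data entry.
for key, spellings in variants.items():
    if len(spellings) > 1:
        print(f"Inconsistent spellings: {sorted(spellings)}")
```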
How do you determine whether you have good data? Auditing the data in the system is essential. Start with imports of new data from construction or capital projects when new equipment is installed. Your CMMS/EAM should allow exporting data into Excel or other formats. Once exported, sort the fields and review the content to reveal the bad data.
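Continuing that idea, the sketch below shows what such a review might look like in Python once the export is saved as a CSV file. The column names (SERIAL_NUMBER, DESCRIPTION) and the 30-character field limit are illustrative assumptions, not any vendor's actual schema.

```python
# Minimal audit sketch for a CMMS/EAM export saved as CSV. Column names and
# the field-length limit are assumptions; use your own export's schema.
import pandas as pd

df = pd.read_csv("asset_export.csv", dtype=str)

# 1. Blanks: count empty or missing values per field.
blanks = df.isna().sum() + (df == "").sum()
print("Blank values per field:\n", blanks)

# 2. Confusable characters: serial numbers mixing digits with O/l/I are suspect.
suspect = (df["SERIAL_NUMBER"].str.contains(r"\d", na=False)
           & df["SERIAL_NUMBER"].str.contains(r"[OlI]", na=False))
print("Possible 0/O or 1/l confusion:\n", df.loc[suspect, "SERIAL_NUMBER"])

# 3. Truncation: values that exactly hit the field's length limit may be cut off.
FIELD_LIMIT = 30  # assumed CMMS field length
at_limit = df["DESCRIPTION"].str.len() == FIELD_LIMIT
print("Descriptions at the length limit (possibly truncated):", at_limit.sum())
```

Sorting and eyeballing the exported fields in Excel accomplishes the same thing; a script simply automates the most common checks.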
Is good data a hardware/software issue or a people issue? In general, machines perform the functions for which they are designed. People provide the input. While stopgaps can be created to screen or highlight unacceptable data, people are still needed to provide corrections and content checks. Manual entry is probably the largest contributor to bad or inconsistent data. Knowing the data-field format is also crucial. Is the field open or limited to numbers or letters? Are any special characters allowed? What is the field length limit? Are blanks allowed or desirable?
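Those field-format questions can be captured directly as entry rules. The sketch below expresses a few hypothetical rules (pattern, length limit, whether blanks are allowed) and applies them before data reaches the system; the field names and formats are illustrative assumptions only.

```python
# Sketch: per-field entry rules applied before data reaches the CMMS.
# The fields and formats below are illustrative assumptions, not any
# vendor's actual schema.
import re

FIELD_RULES = {
    # field name: (regex the value must fully match, max length, blanks allowed)
    "ASSET_TAG":    (r"[A-Z]{2}-\d{4}", 7,  False),
    "MODEL_NUMBER": (r"[A-Z0-9\-]+",    20, False),
    "NOTES":        (r".*",             60, True),
}

def validate(field: str, value: str) -> list[str]:
    pattern, max_len, blanks_ok = FIELD_RULES[field]
    errors = []
    if not value:
        if not blanks_ok:
            errors.append(f"{field}: blank not allowed")
        return errors
    if len(value) > max_len:
        errors.append(f"{field}: exceeds {max_len}-character limit")
    if not re.fullmatch(pattern, value):
        errors.append(f"{field}: does not match required format")
    return errors

# "ab-12345" is both too long and in the wrong format, so two errors print.
print(validate("ASSET_TAG", "ab-12345"))
```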
Use cases
Good data advances the maintenance/reliability and asset management of a facility. It produces accurate reports for KPIs (key performance indicators) and supports reliability-analysis techniques such as MTBF (mean time between failures), MTTR (mean time to repair), FCR (failure/cause/remedy) codes, asset life-cycle tracking, and TCO (total cost of ownership); a simple MTBF/MTTR calculation is sketched after the list below. Conversely, here are some examples of what can go wrong when data is bad:
• A motor in a building a half-mile away needs to be replaced. No nameplate information is available in the CMMS, so a physical visit becomes necessary.
• The company's overdue-PM KPI includes numerous PMs that were actually completed, because the completed date was filled in or selected improperly.
• A $40k circuit breaker is ordered and received, only to discover that one letter in the catalog number was wrong. The new breaker won't fit the given switchboard slot.
• A planner is working a job that comes up every seven years and sees that no details were entered into the job description in the last work order.
• A report is run for capital planning, concentrating on replacing a group of centrifugal pumps. They have been assigned a parent asset but two are missing data in that field, so the order is two pumps short.
• The quarterly MTBF report points to an entire building as the culprit because that asset was selected too often instead of drilling down to the actual affected asset.
• The monthly budget-review report is shared with management, and a labor-intensive, week-long machine rebuild isn't showing up because the labor hours were never entered.
Hopefully, none of these examples have occurred in your plant, but they happen and illustrate the value of maintaining good data.
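For the reliability KPIs mentioned above, here is a brief sketch of one common way to compute MTBF and MTTR from failure work-order timestamps. The records are made up for illustration; in practice they would come from your CMMS history, which is exactly why a wrong completed date corrupts the numbers.

```python
# Sketch: MTBF and MTTR from work-order history for one asset. The records
# below are hypothetical; real dates come from CMMS failure work orders.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M"
# (failure start, repair complete) pairs for a single asset
failures = [
    ("2021-03-01 08:00", "2021-03-01 14:00"),
    ("2021-07-15 09:30", "2021-07-16 11:30"),
    ("2021-11-20 22:00", "2021-11-21 02:00"),
]
events = [(datetime.strptime(s, fmt), datetime.strptime(e, fmt))
          for s, e in failures]

# MTTR: average repair duration, in hours.
repair_hours = [(e - s).total_seconds() / 3600 for s, e in events]
mttr = sum(repair_hours) / len(repair_hours)

# MTBF: average uptime between the end of one repair and the next failure.
uptimes = [(events[i + 1][0] - events[i][1]).total_seconds() / 3600
           for i in range(len(events) - 1)]
mtbf = sum(uptimes) / len(uptimes)

print(f"MTTR: {mttr:.1f} h, MTBF: {mtbf:.1f} h")
```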
Improving data integrity
To improve data integrity, evaluate every field within the CMMS or EAM for the use of default values, or set it as a mandatory field to ensure it is filled properly. This alone eliminates many errors. When entering attribute data for new or existing assets, have more than one detail-oriented person double-check the entries. For work-order and PM-completion data that pass through many hands, consider adding a metric section to the monthly maintenance-KPI scorecard to track missing or incomplete data fields.
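One way to build that scorecard metric, sketched below under the assumption of a CSV export of closed work orders with hypothetical field names, is to report the share of work orders whose required fields are all filled in, plus a per-field breakdown.

```python
# Sketch: monthly data-quality metric for a KPI scorecard: the share of
# closed work orders with every required field filled in. The file and
# field names are assumptions, not from any specific CMMS.
import pandas as pd

REQUIRED = ["ASSET_ID", "FAILURE_CODE", "LABOR_HOURS", "COMPLETED_DATE"]

wo = pd.read_csv("closed_work_orders.csv", dtype=str)
complete = wo[REQUIRED].notna().all(axis=1) & (wo[REQUIRED] != "").all(axis=1)
print(f"Work orders with complete data: {complete.mean():.0%}")

# Per-field missing rates show where spot training is needed.
missing_rate = 1 - (wo[REQUIRED].notna() & (wo[REQUIRED] != "")).mean()
print(missing_rate.sort_values(ascending=False))
```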
Since the main cause of bad data is manual entry, a culture built around good data entry is paramount. Setting up drop-down menus when applicable will significantly reduce variation. Another useful software feature is autofill.
If the data is important enough, a second check improves the outcome. Most company workflows require technicians to complete work orders and a planner or supervisor to review and close them. This extra step serves as a good quality check.
Do you need to hire a data analyst? That would be the optimum solution, if you can afford it and it’s warranted. Your primary objective should be to establish a culture of proper data entry by the masses. If a data analyst is a possibility, the most effective person is a subject-matter expert turned into a data analyst. This SME will detect errors much faster than a pure data analyst since they understand what the information should look like.
Who's responsible for the data? It would be great to hire an analyst and pin the responsibility on him/her, but everyone is responsible for their part. Whoever obtains the data, checks the data, and enters the data is responsible. Of course, the various data sets are owned by different departments, e.g., purchasing, production, maintenance, and reliability. It must be a team effort that's driven from the top down.
What are the roles of IT and OT in the overall effort? In addition to providing the software and support, IT can be a resource for data integrity. IT is definitely the source for maintaining cybersecurity and should manage firmware updates. OT should also know if the latest firmware has been installed for their equipment. OT can have a huge impact on data collection for predictive analytics, but probably not so much for the CMMS/EAM data. Collaboration is key.
Like any process, data integrity requires guidance on responsibilities if there is to be any hope of achieving it. A formal, written policy and procedure is highly recommended. Responsibilities, the list of data types, the software programs involved, and the tracking and timing of upgrades should all be defined to achieve your business goals. As with any good program, you can expect what you inspect: an audit program is necessary to ensure the guidance is followed. More-volatile data should be audited frequently, whereas asset attributes can wait for an annual review. The audit should evolve as data improves or worsens.
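That volatility-based cadence can be written down as a simple schedule. The data sets and frequencies in this sketch are illustrative assumptions to be tuned for each site.

```python
# Sketch: an audit cadence keyed to how volatile each data set is. The data
# sets and frequencies are illustrative assumptions, not a standard.
AUDIT_SCHEDULE = {
    # data set: audit frequency in days
    "work_order_completions": 30,   # volatile; touched daily, audit monthly
    "pm_schedules":           90,   # changes occasionally, audit quarterly
    "asset_attributes":       365,  # mostly static, annual review
}

def due_for_audit(days_since_last: dict[str, int]) -> list[str]:
    """Return the data sets whose last audit is older than their cadence."""
    return [name for name, cadence in AUDIT_SCHEDULE.items()
            if days_since_last.get(name, cadence + 1) > cadence]

print(due_for_audit({"work_order_completions": 45, "asset_attributes": 200}))
```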
What training is involved if you want to improve data? Based on responsibilities, all stakeholders should receive training at least annually. As the year progresses, spot training based on audit results should be conducted to steer progress toward better data integrity. Because the responsible parties work with different data sets, training should be tailored to each department rather than delivered as company-wide generic instruction.
Every company sits at a different point along the data-integrity curve. Setting expectations based on the desired outcomes and benefits is the first step. Documented guidance helps set those expectations, training reinforces them and provides feedback on compliance, and auditing closes the loop to ensure they are being met. That full circle will place your department and company on the best path toward meaningful, reliable data. EP
Lee McClish is Manager, Maintenance and Reliability for NTT GDC Americas, headquartered in Sacramento, CA, (ragingwire.com), a global telecommunications and data center company. Previous positions were at BASF, Graphic Packaging, and Packaging Corp. of America as a Reliability Engineer, Maintenance Engineer, RCM Manager, and Production Manager. McClish holds BSME and MBA degrees and CMRP, CRL, and CPMM certifications. He is the author of Maintenance Leadership 101.