Improving Data Management Policies
Grant Gerke | October 10, 2018
Implementing data-management policies in manufacturing is striking fear into the hearts of many plant and corporate managers. The issue touches workforce, automation, continuous-improvement efforts, analytics, machine learning, and scalability, among other concerns.
As manufacturers identify predictive-analytic opportunities and move toward real-time exception-based operator routines, data management is at the heart of the issue. How much equipment and line data is needed? How do companies merge instrument data from historians with production data in databases? What asset-management processes need to be in place? What about data storage and return-on-investment timeframes?
“Maintenance technicians are starting to push back when plant engineering looks to flood systems with data and the ‘more is better’ mentality,” explained David Wilmer, vice president of manufacturing systems at The Aquila Group, Sun Prairie, WI (the-aquila-group.com). “Technicians are requesting sampling intervals be increased from one second to one to five minutes as the vast majority of data only slows effective analysis.”
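The coarser sampling Wilmer describes amounts to rolling raw 1-Hz readings up into one- to five-minute summaries before analysis. A minimal sketch of that downsampling, using illustrative data and a hypothetical `downsample` helper (not Aquila's actual system), might look like this:

```python
from statistics import mean

def downsample(readings, window_s=60):
    """Average raw 1 Hz sensor readings into window_s-second buckets.

    readings: list of (timestamp_s, value) pairs, timestamps ascending.
    Returns one (bucket_start_s, mean_value) pair per window.
    """
    buckets = {}
    for ts, value in readings:
        # Key each reading by the start of its time window
        buckets.setdefault(ts - ts % window_s, []).append(value)
    return [(start, mean(vals)) for start, vals in sorted(buckets.items())]

# Two minutes of simulated 1 Hz data collapse to two 1-minute averages
raw = [(t, 20.0 + (t % 10) * 0.1) for t in range(120)]
summary = downsample(raw)
```

Sixty raw points per window become one, cutting storage and analysis load while preserving the trend technicians actually review.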
The Aquila Group provides training, system integration, and consulting services for plant-modernization projects. The company’s Green Light Monitoring System gathers machine-level data for measurement of overall equipment effectiveness (OEE) and feeds this data to the company’s Dynamic Machine Management (DMM) manufacturing execution system (MES).
Including maintenance technicians and operators on new data-management projects has become a common approach among manufacturing professionals in 2018. “The best candidate to drive the effort is someone who sees that asset-performance management is not just about reducing downtime or maintenance costs,” according to Dan Miklovic in his recent blog post, “5 Most Asked Questions on Smart Maintenance,” for LNS Research, Cambridge, MA. “This person also understands that asset performance drives higher performance and profitability,” Miklovic wrote.
Keeping an eye on the future of maintenance is what Jim Wetzel, interim CEO of the Clean Energy Smart Manufacturing Innovation Institute, Los Angeles, has been doing for quite a while. Wetzel spent many years at General Mills as the director of Global Reliability and implemented an MES for the corporate giant in the 1990s.
“Management is enamored with IIoT [the Industrial Internet of Things] or predictive maintenance, while ignoring the fact that their basic processes and data are inadequate,” Wetzel said. “Many organizations would like to skip over the essentials and just jump to apply the advanced concepts. This would be a big mistake and not deliver on the opportunity.”
“General Mills collected over 700 billion data points a day and probably 98% of those we never used every day,” said Wetzel in a recent phone interview. “But, I still would collect all this data for troubleshooting purposes and want to have that information there when troubleshooting a line or piece of equipment.”
Wetzel endorses continuous-improvement programs with an eye towards exception-based reporting. “Determining centerlines or recipes for key parameters of a system/unit of operations is typically a core component of a continuous improvement program,” he added. “Tracking the deviations from these centerlines can become a basic or starting methodology to exception reporting.”
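The centerline approach Wetzel outlines can be sketched in a few lines: establish a target value for a key parameter, then report only readings that drift beyond a tolerance band around it. The function name, the 2% band, and the temperature data below are illustrative assumptions, not a specific plant's configuration:

```python
def exceptions(readings, centerline, tolerance_pct=2.0):
    """Return (index, value, deviation_pct) for out-of-band readings."""
    out = []
    for i, value in enumerate(readings):
        deviation_pct = (value - centerline) / centerline * 100.0
        if abs(deviation_pct) > tolerance_pct:
            # Only deviations beyond the band surface to the operator
            out.append((i, value, deviation_pct))
    return out

# Oven temperature centerlined at 180 C; only the spike is reported
temps = [180.2, 179.8, 181.0, 188.5, 180.1]
alerts = exceptions(temps, centerline=180.0)
```

Instead of an operator scanning every reading, the system surfaces the one excursion worth investigating, which is the essence of exception-based reporting.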
For more discussion on the dynamics of data management, including a real-life example of paring down plant data from David Wilmer, see Part II of this article. EP