Usage of the MANTIS Service Platform Architecture for servo-driven press machine maintenance
A forming press is a machine tool that changes the shape of a workpiece by applying pressure. Throughout the MANTIS project, FAGOR ARRASATE's servo-driven press machine is being analysed in order to set up strategies that will permit online predictive maintenance of the press machine.
The proposed solution advocates soft-sensor-based algorithms. The soft sensing algorithms provide information about the physical status of the components, as well as about the performance of the systems. These algorithms take advantage of existing or available internal signals of the systems. The objective is to estimate inaccessible states and parameters of the systems using as few physical sensors as possible to acquire the necessary signals.
So far, the system components have been characterised on a scaled test bench of a real press machine. A servomotor has been analysed in order to extract information about its performance during press machine work cycles, such as the applied current, voltages and generated torque. Moreover, the applied soft sensor algorithm has proven suitable for estimating the desired magnitudes of a system even when some of its parameters are unknown.
At the same time, the mechanical part of the press machine has been analysed in order to elaborate an analytical model of it. The purpose of this development is to relate the torque generated by the servomotor to the force applied by the press ram.
This information will be used to detect effects that occur during metal forming processes, such as unbalanced forces and the cutting-shock effect, enabling maintenance of the system to be carried out.
Limit checking of measured variables in a monitored system is a method frequently used for fault detection. 3E uses it as the first step in its fault detection and diagnosis protocol, to determine at which stage of a photovoltaic plant actions need to be taken before any deeper analysis of the characteristics of the problems. Here, 3E illustrates the methodology used to apply it in their use case.
Photovoltaic (PV) plants are energy conversion systems which convert sunlight into electricity that is fed into the public utility grid. The physical structure of a PV plant and the important process variables measured when monitoring its performance are illustrated in Figure 1. The input variables of the process model are the solar irradiance in the plane of the PV array (GPOA) and the ambient temperature (Tamb). Output variables from the process model point of view are: the PV module temperature (Tmod); the direct current voltage (VDC) and current (IDC) at the output of the PV array; the alternating current voltage (VAC); the power factor (PF); and the electric AC power to the grid (PAC).
Normalized performance parameters can be derived from the previously mentioned measurements and allow us to quantify the energy flow and the losses through the PV array per loss type. They are:
LA,I = Yr – YA,I
LA,T = YA,I – YA,T
LA,V = YA,T – YA
with LA,I, LA,T, LA,V the conversion losses due to current, temperature and voltage, respectively, and Yr, YA,I, YA,T, YA the normalized energy yields: the reference yield (based on the irradiation from the sun), the array yield after current losses, the array yield after temperature losses and the array yield after all array losses, respectively.
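In code, this cascade of yields and per-type losses can be sketched as follows (a minimal Python illustration with made-up yield values; the variable names mirror the symbols above):

```python
# Normalized energy yields (kWh/kWp) over an evaluation period,
# ordered along the conversion chain -- illustrative values only.
Y_r   = 5.0   # reference yield from in-plane irradiation (GPOA)
Y_A_I = 4.6   # array yield after current losses
Y_A_T = 4.4   # array yield after temperature losses
Y_A   = 4.2   # array yield after all array losses

# Per-type conversion losses, as defined in the text.
L_A_I = Y_r   - Y_A_I   # current-related losses
L_A_T = Y_A_I - Y_A_T   # temperature-related losses
L_A_V = Y_A_T - Y_A     # voltage-related losses

# The per-type losses sum to the total array loss.
assert abs((L_A_I + L_A_T + L_A_V) - (Y_r - Y_A)) < 1e-9
```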
The main variables used for limit checking are solar irradiance in the plane of the PV array (GPOA), ambient temperature (Tamb), PV module temperature (Tmod), DC voltage and current at the output of the PV array (VDC, IDC) and electric AC power to the grid (PAC). The AC voltage (VAC) and power factor (PF) are not used for limit checking.
For checking the operational performance over different energy conversion steps, a performance loss ratio per step is defined. This performance loss ratio is computed for a given time span, e.g., a day up to several months. It is the useful energy lost over the energy conversion step divided by the energy available, i.e. the incoming solar energy on the PV array as represented by the solar irradiance in the plane of the PV array (GPOA); all normalized to standard rating conditions of the PV array. Accordingly, the overall performance of a PV plant is described by the performance ratio (PR), i.e., 100% minus the sum of all performance losses.
In practice, we compare the performance loss ratios from measurements to model-based performance loss ratios and thresholds. The model is fed with measured values of GPOA and Tamb. The model parameters can be set from data sheet parameters of the devices in the PV plant or identified from measurements from the plant in a healthy state. Accordingly, adequate limits can be derived either from tolerances on the data sheet parameters or from choosing percentiles from the healthy plant. Both the model-based performance loss ratios and their limit values vary depending on the PV plant and the weather during the evaluation period.
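A minimal sketch of such limit checking, assuming thresholds derived as percentiles from a healthy period (the values and names below are illustrative, not from the actual plant):

```python
import numpy as np

# Daily current-related performance loss ratios (% of incoming solar
# energy) from a period when the plant was known to be healthy.
healthy_losses = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.5, 2.3])

# Derive an upper limit as a high percentile of the healthy distribution.
upper_limit = np.percentile(healthy_losses, 95)

def check_limit(measured_loss, limit):
    """Flag a loss ratio that exceeds its healthy-state limit."""
    return measured_loss > limit

# A measured loss ratio well above the healthy range triggers an alarm.
print(check_limit(5.8, upper_limit))  # True for these example values
```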
Figure 2 illustrates this application of limit checking for a PV plant located in Belgium. The current-related array losses ('Array (current)') in Figure 2a by far exceed the threshold. During a thorough maintenance action after this problem was detected, several smaller PV module failures were fixed. After the maintenance action, all performance loss ratios were back within their expected ranges, yielding a much higher PR of 82.9% (Figure 2b).
In the frame of Mantis, the three Belgian partners (Sirris, Ilias Solutions and 3E) focused their work on exploiting intelligent data-driven technologies for failure detection and root-cause analysis. In this video we will show the work done and the results obtained within the Mantis project by the Belgian consortium:
This use case studied the analysis of sensor data from a brake press in order to facilitate its maintenance. Brake forming is the process of deforming a sheet of metal along an axis by pressing it between clamps. A single metal sheet may be subjected to a sequence of bends, resulting in complex metal parts such as electrical lighting posts and metal cabinets.
These machines require very accurate control to ensure the required bending precision, which is in the order of tens of microns. They have stringent safety requirements that also impose certain restrictions on their operation. In addition, production efficiency is also a very important factor in their operation.
In order to ensure production quality under these stringent requirements, it is important to make sure that all of the machines' components are in perfect working order. The goal of this use case in the MANTIS project is to use a set of sensors to detect failures and then inform the maintenance staff of these events. In this work we used a top-of-the-line Greenbender model to implement and test a system that could accomplish these goals.
A multi-disciplinary team participated in the research and development of this use case. The use case owner is the machine tool manufacturer ADIRA that sells machines worldwide. ADIRA’s main goal is to improve the maintenance services they provide to their customers.
Research and development in the area of communications was jointly done by ISEP and UNINOVA. This included the IoT architecture, sensors, communication’s hardware and infrastructure deployment. Data processing and analytics was performed by INESC and ISEP. INESC focused on root cause analysis (RCA), remaining useful life (RUL) forecasting and anomaly detection. ISEP worked on knowledge based techniques for failure detection by developing and testing a decision support system. In addition to this ISEP also developed a Human Machine Interface (HMI) application that provides access to IoT infrastructure and several MANTIS services, which includes the notification of failures.
JSI and XLAB also provided valuable input and feedback concerning the initial research and design tasks of the communications infrastructure (real time data transmission) and the HMI (usability).
The MANTIS project has provided INESC with the opportunity to research, test and apply machine learning techniques in a real-world setting. Tasks included the detailed study of the machine tools’ processes and components, eliciting requirements and information from the domain experts and evaluating several machine learning algorithms. Due to the many challenges that were faced in identifying, collecting and using sensor data, only anomaly detection is currently being deployed in this use case.
A set of 11 conditions is continually monitored for anomalies. For each condition, two thresholds are used to identify small and large deviations from the expected behavior, respectively. Whenever such a deviation is detected, an alert is dispatched to the HMI, where the users are notified. These monitoring conditions should allow ADIRA to detect failures in the hydraulic system, the numeric controller and several electric components. In addition, oil temperature and machine vibrations are also being monitored.
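A minimal sketch of such two-threshold monitoring (the condition, expected value and thresholds below are hypothetical examples, not ADIRA's actual configuration):

```python
def classify_deviation(value, expected, small_thr, large_thr):
    """Classify the deviation of a monitored condition from its
    expected value using two thresholds (names are illustrative)."""
    deviation = abs(value - expected)
    if deviation >= large_thr:
        return "large-deviation alert"
    if deviation >= small_thr:
        return "small-deviation alert"
    return "normal"

# Example: oil temperature monitored around an expected 55 degrees C,
# with hypothetical thresholds of 5 and 15 degrees.
print(classify_deviation(57.0, 55.0, 5.0, 15.0))  # normal
print(classify_deviation(63.0, 55.0, 5.0, 15.0))  # small-deviation alert
print(classify_deviation(75.0, 55.0, 5.0, 15.0))  # large-deviation alert
```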
The MANTIS system, which includes INESC's analytics module, has been deployed as a set of services in the Cloud. Initial tests show good false positive rates. We are now in the process of performing on-line evaluations of the detection rates. We are confident that these results will serve as an important first step for ADIRA in enhancing its products with more sophisticated and effective data analytics methods.
Liebherr participates in the MANTIS project as an industrial partner through its hydraulic excavator division. As expected, Liebherr's main expertise consists in developing and optimizing excavators taking different information sources into consideration. However, after delivery to the customer, every excavator automatically generates event and message data, which is currently used mainly for fault diagnostics but not extensively for further investigations.
Among other things, this event data logger records:
timestamp, when an event occurs
type of event, e.g. info, warning or error
unique message identifier of this event class
In combination with anonymized data concerning service partners and customers, the following questions are relevant:
Is there a relation between the message patterns and the corresponding anonymized service partner?
Is there a relation between the message patterns and the anonymized customer?
Analysis approach for clustering
The related analysis was performed by the University of Groningen (RUG), a research partner within the MANTIS project, by considering each excavator as a stochastic message generator. During preprocessing, the different messages were first counted per excavator and then normalized by the total number of occurrences of each unique message identifier.
Based on the computed message probabilities per machine, a k-means clustering was performed. To overcome initialization effects, the clustering was repeated 100 times with random initialization. The relationship between each excavator's cluster assignment and the corresponding service partner or customer was subsequently examined for each 'k' with the chi-square test. The average significance over the 100 model estimations for each 'k' then served as the quality function.
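The procedure described above can be sketched as follows (a simplified Python illustration with synthetic message data; the real analysis used 100 runs per 'k', reduced here for speed):

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-machine message probabilities (rows sum to 1)
# and an anonymised customer label per machine.
n_machines, n_message_ids = 60, 12
counts = rng.integers(1, 50, size=(n_machines, n_message_ids))
probs = counts / counts.sum(axis=1, keepdims=True)
customers = rng.integers(0, 4, size=n_machines)

def avg_significance(X, labels, k, n_runs=100):
    """Cluster n_runs times with random initialisation and average the
    chi-square p-value of the cluster/label contingency table."""
    p_values = []
    for seed in range(n_runs):
        km = KMeans(n_clusters=k, n_init=1, random_state=seed).fit(X)
        table = np.zeros((k, labels.max() + 1))
        for c, l in zip(km.labels_, labels):
            table[c, l] += 1
        # Drop empty rows/columns before the test.
        table = table[table.sum(axis=1) > 0][:, table.sum(axis=0) > 0]
        p_values.append(chi2_contingency(table)[1])
    return np.mean(p_values)

p = avg_significance(probs, customers, k=7, n_runs=5)
print(round(p, 3))
```

On random data like this, the averaged p-value stays far from significance; a clear minimum over different 'k', as in Figure 2, is what suggests real structure.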
Results of cluster analysis
As can be seen in Figure 1, there is no tendency towards a relationship between the service partner and the messages per excavator. The average significance level is clearly higher than 0.05, and all of the individual levels have nearly the same magnitude.
In contrast to Figure 1, Figure 2 shows a clear minimum at k=7, indicating that for this number of groups the distribution of machines over customers is unlikely to be random. Although the p-value of 0.0588 is slightly above the significance level of 0.05, the magnitude at k=7 is clearly lower than at other values of k.
In order to explain the minimum at k=7, Liebherr decoded the anonymized customers and tried to find a manual description of the clusters. This cumbersome work did not yield the expected result, namely short cluster descriptions, but instead revealed mismatches in the customer data.
In summary, the analysis showed that, with skilful use of analysis algorithms, superficially unmanageable data can disclose insights. However, one of the basic requirements for later use of the results is proper data preparation.
When analysing sensor data, you are typically confronted with different challenges relating to data quality. Here, we show you how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.
Nowadays, especially with the advent of the Internet of Things (IoT), large quantities of sensor data are collected. Small sensors can easily be installed, on multipurpose industrial vehicles for instance, in order to measure a vast range of parameters. The collected data can serve many purposes, e.g. to predict system maintenance. However, when analysing it, you are typically confronted with different challenges relating to data quality, e.g. unrealistic or missing values, outliers, correlations and other typical and atypical obstacles. The aim of this article is to show how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.
Within the MANTIS project, Sirris is developing a general methodology that can be used to explore sensor data from a fleet of industrial assets. The main goal of the methodology is to profile asset usages, i.e. define separate groups of usages that share common characteristics. This can help experts to identify potential problems, which are not visually observable, when the resulting profiles are compared with the expected behaviour of the assets and when anomalies are detected.
In this article, we will describe the methodology of asset usage profiling for proactive maintenance prediction. The data used in this article is confidential and anonymised; we therefore cannot describe it in detail. It mainly consists of duration and resource consumption as well as a range of parameters measured via different sensors. For our analysis, we used Jupyter Notebook with appropriate libraries such as pandas, scipy and scikit-learn.
Data can be polluted: since it is collected from different sources, it can contain duplicates, wrong values, empty fields and outliers, which should all be considered carefully. Therefore, the first natural step is to conduct an initial exploration of the data and to prepare a single reference dataset for advanced analysis: cleaning the data by means of visual and statistical methods, then selecting the right attributes to work with further.
In our example dataset, we find negative or zero resource consumption, which is obviously impossible, as shown in Figure 1. In our case, since there are few outliers of this type, we simply remove them from the dataset.
Figure 1 Zero or negative consumption
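Assuming a pandas DataFrame with illustrative column names (the real dataset is confidential), this cleaning step might look like:

```python
import pandas as pd

# Illustrative usage records; column names are our own inventions.
df = pd.DataFrame({
    "duration_s":  [120, 300, 45, 210, 90],
    "consumption": [3.2, 7.5, 0.0, -1.4, 2.1],
})

# Zero or negative resource consumption is physically impossible,
# so we simply drop those rows.
clean = df[df["consumption"] > 0].reset_index(drop=True)
print(len(clean))  # 3
```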
Another possible example is that of an erroneous date in the data. For example, dates may be too old compared to the rest of your dataset; future dates can even exist. Your decision to maintain, fix or remove wrong instances can depend on many factors, such as how big your dataset is, whether an erroneous date is very important at the current stage, etc. In our case, we maintain these instances since, at this moment, the date is not important for analysis and the percentage of this subset is very low.
Outliers are extreme values that deviate markedly from other observations and also need to be dealt with carefully. They can be detected visually and by statistical means. Sometimes we can simply remove them; sometimes we want to analyse them thoroughly. Visualising the data directly reveals some potential outliers; refer to the point in the upper right-hand corner of Figure 2. In our case, such high values for duration and consumption are impossible, as shown in Figure 3. Since it is the first record for this type of asset, it may have been entered manually for test purposes; we consequently choose to remove it.
Figure 2 Visual check for outliers
Figure 3 Impossible data
In Figure 4, we can see a positive linear correlation between consumption and duration, which is to be expected, although we may still find some outliers using the 3-sigma rule. This rule states that, for the normal distribution, approximately 99.7 percent of observations lie within 3 standard deviations of the mean. Moreover, based on Chebyshev's inequality, even for non-normally distributed data, at least 88.9 percent of cases fall within 3-sigma intervals. Thus, we consider observations beyond 3 sigmas as outliers.
Figure 4 Data after cleaning
In Figure 5, we see that our data is quite normal, centred around 0, with most values lying between -2 and 2. This means that the 3-sigma rule will give us accurate results. Note that you must normalise your data before applying this rule.
Figure 5 Distribution of normalised consumption/s
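A minimal sketch of this normalise-then-apply-3-sigma step on synthetic consumption values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Mostly well-behaved consumption-per-second values plus two extremes.
values = np.concatenate([rng.normal(2.0, 0.3, 500), [6.0, -2.0]])

# Normalise (z-score), then flag observations beyond 3 sigmas.
z = (values - values.mean()) / values.std()
outliers = np.abs(z) > 3
print(int(outliers.sum()))
```

The two injected extremes are flagged; in real data each flagged usage would then be discussed with a domain expert, as above.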
Results are shown in Figure 6. The reason for such a significant deviation from the average in consumption and duration of certain usages is to be discussed with a domain expert. One instance with very low consumption for a long duration raises particular questions (Figure 7).
Figure 6 3-sigma rule applied to normalised data
Figure 7 Very low consumption for its duration
Advanced data exploration
As previously stated, we are looking to profile asset usages in order to identify abnormal behaviour and therefore, along with duration and resource consumption, we also need to investigate the operational sensor data for each asset. This requires us to define groups of usages that share common characteristics; however, before doing so, we need to select a representative subset of data with the right sensors.
From the preliminary analysis, we observed that the number of sensors can differ between assets and even between usages of the same asset. Therefore, for later modelling we need to select only usages that always contain the same sensors, i.e. training a model requires vectors of the same length. To achieve this, we can use the following approach, illustrated in Figure 8.
Figure 8 Selecting sensors
Each asset has a number of sensors that can differ from usage to usage, i.e. some modules can be removed from or installed on the asset. Thus, we need to check the presence of these sensors across the whole dataset. Then, we select all usages with sensors that are present above a certain percentage, e.g. 95 percent, in the whole dataset. Let's assume our dataset contains 17 sensors that are present in 95 percent of all usages. We select these sensors and discard those with lower presence percentages. This way, we create a vector of sensors of length 17. Since we decided to include sensors if they are 95 percent present, a limited number of usages may still be selected even though they do not contain some of the selected sensors, i.e. we introduce gaps, which are marked in yellow in the figure. To fix these gaps, you can either discard these usages or impute values for the missing sensors. Imputation can be complex, as you need to know what these sensors mean and how they are configured. In our case, these details are anonymised and the affected usages are consequently discarded. You may need to lower your presence percentage criterion in order to keep a sufficiently representative dataset for further analysis.
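The presence-based selection described above can be sketched with pandas (toy data, with the presence criterion lowered so this small example keeps more than one sensor):

```python
import numpy as np
import pandas as pd

# Illustrative usages-by-sensors table; NaN means the sensor was not
# present for that usage (column names are our own).
df = pd.DataFrame({
    "s1": [1.0, 2.0, 1.5, 1.2, 1.8],
    "s2": [0.3, np.nan, 0.4, 0.5, 0.2],
    "s3": [np.nan, np.nan, np.nan, 7.0, np.nan],
})

# Keep sensors present in at least 60% of usages (95% in the text;
# lowered here only so the toy example keeps more than one column).
presence = df.notna().mean()
kept = presence[presence >= 0.6].index.tolist()

# Discard usages that still have gaps in the kept sensors.
subset = df[kept].dropna()
print(kept, len(subset))
```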
After the optimal subset is selected, we check the correlation of the remaining sensors. We do this because we want to remove redundant information and to simplify and speed up our calculations. Plotting a heatmap is a good way of visualising correlation. We do this for the remaining sensors as shown in Figure 9.
Figure 9 Sensor correlation heatmap
In our case, we have 17 sensors from which we select only 7 uncorrelated sensors and plot a scatter matrix, a second visualisation technique which allows us to view more details on the data. Refer to Figure 10.
Figure 10 Scatterplot matrix of uncorrelated sensors
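Dropping one sensor from each highly correlated pair can be sketched like this (synthetic data; the 0.9 correlation threshold is our own illustrative choice):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Toy sensor matrix: s2 is almost a copy of s1, s3 is independent.
s1 = rng.normal(size=200)
df = pd.DataFrame({
    "s1": s1,
    "s2": s1 * 0.98 + rng.normal(scale=0.05, size=200),
    "s3": rng.normal(size=200),
})

# Greedily drop one sensor from every highly correlated pair.
corr = df.corr().abs()
to_drop = set()
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if a not in to_drop and b not in to_drop and corr.loc[a, b] > 0.9:
            to_drop.add(b)

uncorrelated = [c for c in df.columns if c not in to_drop]
print(uncorrelated)  # ['s1', 's3']
```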
Based on the selected sensors, we now try to characterise different usages for each asset, i.e. we can group usages across the assets based on their sensor values and, in this way, derive a profile for each group. To do this, we first apply hierarchical clustering to group the usages and plot the resulting dendrogram. Hierarchical clustering helps to identify the inner structure of the data and the dendrogram is a binary tree representation of the clustering result. Refer to Figure 11.
Figure 11 Dendrogram
On this graph, below distance 2, we see smaller clusters that group ever closer to each other. Hence, we decide to split the data into 5 different clusters. You can also use silhouette analysis to select the best number of clusters.
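A minimal sketch of this step, using Ward-linkage hierarchical clustering on synthetic 7-dimensional usage data and a silhouette check on the resulting grouping:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Toy "usages": three well-separated blobs in a 7-dimensional sensor space.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 7))
               for c in (0.0, 2.0, 4.0)])

# Ward linkage builds the hierarchy behind the dendrogram;
# cutting it at a chosen number of clusters gives the grouping.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

# Silhouette analysis helps confirm the chosen number of clusters.
score = silhouette_score(X, labels)
print(round(score, 2))
```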
In order to interpret the clustering, we also want to visualise it. However, 7 sensors mean 7 dimensions and, since we cannot plot in multidimensional space, we apply Principal Component Analysis (PCA) to reduce the number of dimensions to 2. This allows us to visualise the clustering results, shown in Figure 12. Good clustering means that the clusters are more or less well separated, i.e. similar colours are close to one another and not mixed too much with other colours, and this is what we see in the figure.
Figure 12 PCA plot
After the clustering is complete, we can characterise usages. This can be done using different strategies. The simplest method consists of taking the mean of the sensor values for each cluster (i.e. calculating a centroid) to define a representative usage.
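The dimensionality reduction and centroid computation can be sketched as follows (synthetic two-group data standing in for the real usages):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy data: 7 sensor dimensions, with cluster labels already assigned
# (here two synthetic groups).
X = np.vstack([rng.normal(0.0, 0.3, size=(30, 7)),
               rng.normal(3.0, 0.3, size=(30, 7))])
labels = np.array([0] * 30 + [1] * 30)

# Reduce to two principal components for plotting.
X2 = PCA(n_components=2).fit_transform(X)

# Characterise each cluster by its centroid (mean sensor values).
centroids = {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}
print(X2.shape, {c: v.round(1) for c, v in centroids.items()})
```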
The last step involves validating the clusters. We can cross-check clustering with the consumption/duration of usages. For instance, we may expect all outliers to fall within one specific cluster, or expect some other more or less obvious patterns, hence rendering our clusters meaningful. In Figure 13 below, we can observe that the 5 clusters, i.e. 5 types of usages, correspond, to an extent but not entirely, to consumption/duration behaviour. We can see purple spots at the bottom and green spots at the top.
Figure 13 Relationship between clusters and consumption/duration
At this stage, some interesting outliers were detected in the consumption/duration relationships, which may be explained by the purposes the assets were used for. We have found clusters that represent typical usages according to the data. Result validation can be improved by integrating additional data, such as maintenance data, into the analysis. Furthermore, the results can be validated and conclusions confirmed by the domain experts from Ilias Solutions, the industrial partner we are supporting in their data exploitation.
Wireless communications are used in many industrial maintenance scenarios and are practically the only choice for transmitting data from rotating sensors. One such example in MANTIS is the shaft-mounted torque sensor in the press machine by FAGOR ARRASATE, shown below. The sensor sends the data to the receiving antenna mounted vertically from the machine’s ceiling.
The wireless signal must travel from transmitting antenna to the receiving one without overly high attenuation. Furthermore, the signal can travel via different paths, such that out-of-phase components attenuate each other. This so-called multipath effect is particularly strong in industrial environments that contain many large metallic surfaces. Correct placement of the antennas is therefore crucial. A good placement can often be found experimentally through trial and error. However, in certain cases where repeatedly re-locating a receiver or transmitter is not practical, a numerical simulation of radio wave propagation can be used instead.
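The multipath effect can be illustrated with a textbook two-ray model, far simpler than a full ray-tracing simulation but showing how out-of-phase components attenuate each other (the reflection coefficient below is an illustrative assumption):

```python
import numpy as np

wavelength = 0.1225  # m, roughly the 2.4 GHz Bluetooth/WiFi band

def two_path_gain(d_direct, d_reflected, reflection_coeff=-0.9):
    """Relative field strength when a direct ray and one reflected ray
    (with extra path length and a reflection coefficient) combine.
    A simplified textbook model, not the simulator described here."""
    k = 2 * np.pi / wavelength
    direct = np.exp(-1j * k * d_direct) / d_direct
    reflected = reflection_coeff * np.exp(-1j * k * d_reflected) / d_reflected
    return abs(direct + reflected)

# With half a wavelength of extra path, the reflection's phase flip makes
# the rays combine nearly in phase; with a full wavelength they cancel.
print(two_path_gain(10.0, 10.0 + wavelength / 2) >
      two_path_gain(10.0, 10.0 + wavelength))  # True
```

Shifting the receiver by a few centimetres thus moves it between constructive and destructive interference, which is why antenna placement matters so much at these frequencies.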
As an illustration of the concept, we present here a simulation of antenna placement for a simplified model of part of the press machine, shown below. The orange downward-pointing arrow shows the placement of the sensor and the orientation of its transmitting antenna. The white-and-gray shaded rectangle above the sensor is the receptor plane with the result of the simulation, as explained below. Note that the model is enclosed in a rectangular box on all six sides, but the front and top sides are not shown here so that the inside remains visible.
The simulation algorithm is based on the ray-tracing method of images, enhanced by double-refraction modeling. It is computationally complex but highly parallel, and has thus been adapted to run on GPUs. In our case, the runtime of a single simulation is approximately one minute on a high-end gaming GPU. The simulator itself was developed by the Jožef Stefan Institute as part of the national research project ART (Advanced Ray-Tracing Techniques in Radio Environment Characterization and Radio Localization), co-funded by the Slovenian Research Agency and XLAB.
In order to use it in MANTIS, XLAB has developed a Blender plug-in that exports the model into the proprietary simulator format, and a similar import plug-in to import back the simulation results. We then ran a series of experiments simulating the rotation of the shaft, thereby changing both the position and the orientation of the transmitter antenna. The signal wavelength was 0.1225 m, corresponding to Bluetooth/WiFi frequency range. The video below shows the result. The color scale is from 0 dB loss (black) to 100 dB (white). However, values over 90 dB are replaced by red to highlight the areas that will most probably not have acceptable reception with common BLE or WiFi antenna setups.
Clearly visible are the vertical belts resulting from obstruction by, and reflections from, the shafts, as well as the diagonal patterns of reflections from the slanted parts of the model. Most importantly, within the belts of good reception we can see strong multipath interference patterns. Some of the individual, isolated red and white points are artifacts of the simulation, where incidentally no ray has reached that exact area. These artifacts could be reduced by increasing the number of cast rays, which would, however, also slow down the simulation considerably. Finally, it must be noted that this experiment was intended only as an illustration of the concept; no validation or comparison against actual signal measurements in the field has been performed yet.
Goizper S. Coop's products are mechanical components (clutch brakes, gear boxes, indexing units, etc.) installed in different kinds of production machines. These machines are designed to produce continuously, and unplanned downtimes generate high costs. Goizper's components are a key part of some of these production machines, and their health directly influences the machine status.
Breakdown of Components
Furthermore, if one of these components fails, it takes a long time for a new one to be sent to the customer's facilities, the old one to be removed and the new one to be set up. In these cases, maintenance of the production asset means considerable expense for customers and suppliers.
MANTIS for predictive maintenance
The MANTIS platform provides an online and future view of the health of these components. Smart sensors installed on the mechanical component are connected to automatic monitoring and alerting, performed within the smart-G box located next to the component. This big data is then processed in the Cloud and, through different maintenance data analytics, the status and future trend of the component's health is obtained as an output.
Obviously, the introduction of this cyber-physical system will not eliminate all machine breakdowns, but it will help to considerably reduce unplanned machine downtimes, so that customer and supplier can plan their maintenance tasks and reduce this kind of stop.
Within the MANTIS ECSEL project, Goizper has collaborated closely with one of its customers, Fagor Arrasate, to address the real inconveniences and reduce the expenses that unplanned downtimes cause in both firms.
Compact excavators are often rented at an hourly or daily rate. No meters are used, which means that only calendar hours or days are used for billing. For maintenance, the system has an "engine hour" meter, but this only indicates when the system is running (idle, driving or operating).
The proposal is to introduce additional meters as more precise counters of the actual use of the machine. A single sensor is proposed for the solution, providing a very cheap way of obtaining much more usage data.
For the rental case, a "power by the hour" rate could be more efficient, i.e. the end customer pays for the actual usage or wear of the machinery and not just the number of hours the machine is reserved. This would give a fairer pricing model, since the real cost of running the machinery is mostly due to maintenance. It would give the user an incentive to take care of the machine while using it. It also gives a better way to estimate the need for maintenance or to balance out the usage of equipment.
For other cases, a simple sensor could bring benefits such as higher fleet availability and lower operating costs by enabling the following:
Machine health and how to predict asset failure (predictive maintenance)
Prevent or detect abuse
Provide data for warranty models
Provide data for fleet management/optimization
All of the above mentioned points can be addressed with a simple and robust IMU.
Proof of concept thesis
For this proof of concept, we state a thesis to test the data collection and analytic capability of such a system:
“We believe that we can measure how many hours a hammer and tracks / undercarriage has been used on a compact excavator by measuring the vibration pattern”
As a proof of concept, we want to be able to detect the following states:
Engine Off – ID 4001
Idle – low RPM ID 4002
Idle – High RPM ID 4003
Driving – Turtle gear ID 4010
Driving – Rabbit gear ID 4011
Driving – Slalom ID 4012
Hammer – ID 4020
Other states (such as abuse or hard usage) could also be detected.
The Machine Learning approach
A single IMU sensor is installed in the frame of the vehicle. Data is collected at high resolution and a high sampling frequency on a small embedded device in the vehicle.
Model creation data
A series of tests covering the aforementioned states was made. The data was labeled with each state.
After data labeling, a decision tree was created using statistical features of the data.
The decision tree can now be applied to data collected in real time, on the embedded device.
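As a sketch of this pipeline, the following trains a decision tree on statistical features of labelled vibration windows and applies it to a new chunk (purely synthetic data; the real features and state set differ):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def features(window):
    """Statistical features of one fixed-size chunk of IMU samples."""
    return [window.mean(), window.std(), np.abs(window).max()]

def make_windows(scale, n=40, size=200):
    """Synthetic vibration windows for one machine state (stand-ins
    for the real labelled IMU recordings)."""
    return [rng.normal(scale=scale, size=size) for _ in range(n)]

# Three illustrative states: engine off, idle (low RPM), hammering.
states = {4001: 0.01, 4002: 0.2, 4020: 1.5}
X, y = [], []
for state_id, scale in states.items():
    for w in make_windows(scale):
        X.append(features(w))
        y.append(state_id)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Classify a new chunk collected "in real time".
new_window = rng.normal(scale=1.5, size=200)
print(clf.predict([features(new_window)])[0])  # 4020 for this synthetic data
```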
A new series of tests was made. This data was again labeled with each state, then parsed with the decision tree generated from the earlier model data (using fixed data chunk sizes).
In the figure below, the results from the algorithms can be seen.
On the top row of bars, the data labels (the ground truth) are shown in colour. The next row of bars shows the detected states. The bottom graph is a visualization of part of the collected raw data.
As can be seen, the colors match with very high precision. Only at the beginning and end of the states are there small errors. This is most likely due to the data labeling (i.e. as the labels were created manually with a stopwatch, they may not be completely accurate in time).
The IMU sensor and embedded device mounted on the Compact Excavator are able to provide data for machine learning and the recognition of at least 6 different usage patterns.
The usage information can now be collected, and a “power by the hour” renting concept can be introduced. For example, the renting company can provide an app where the customer can specify how much hammering they want, and how much driving etc. Then a much lower price can be provided. If data is collected and transmitted through GSM, the app can even update in real time, showing usage data.
This means that the operator of the vehicle can see in real time how much of the usage budget has been spent. A warning could be provided when, for example, 80% of the hammering hours have been used, similar to roaming abroad with a mobile phone and a fixed number of megabytes available.
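A warning mechanism like that is straightforward; the sketch below shows one possible shape for it. The budget figures, category names and 80% threshold are illustrative, not values from the use case.

```python
# Hypothetical usage budget, in hours, agreed in the rental contract.
USAGE_BUDGET = {"hammer": 10.0, "driving": 40.0}
WARN_AT = 0.8  # warn when 80% of the budgeted hours are spent

def check_usage(spent_hours):
    """Return a warning string for every usage category at or above
    the warning threshold of its budget."""
    warnings = []
    for category, budget in USAGE_BUDGET.items():
        used = spent_hours.get(category, 0.0)
        if used >= WARN_AT * budget:
            warnings.append(
                f"{category}: {used:.1f} of {budget:.1f} h used "
                f"({100 * used / budget:.0f}%)"
            )
    return warnings

msgs = check_usage({"hammer": 8.5, "driving": 12.0})
print(msgs)  # only hammering has crossed its threshold
```

Fed with the usage meters from the state recognition above, this check could run either in the operator's app or on the embedded device itself.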
The whole setup was completed within a few hours: mounting the system took 30 minutes, collecting the model creation data took one hour, creating the models took 30 minutes, and testing the system took another hour. We started in the morning, and before lunchtime everything was mounted, calibrated, validated and ready for use.
This sensor and embedded system provides a very easy way of obtaining actual, valid usage information on mechanical systems.
It can easily detect more states. The meters provided could also be summarized and used to tell the operator when it is time to replace the hammer – before it actually breaks. The time savings from this alone are enough to pay for the system.
Wapice is a Finnish company specialized in providing software and hardware solutions for industrial companies for a wide variety of purposes. We have developed remote management and condition monitoring solutions since the beginning, and our knowledge of this business domain has evolved into our own Internet of Things platform, IoT-Ticket. Today IoT-Ticket is a complete industrial IoT suite that includes everything required, from acquiring the data to visualizing and analyzing asset-lifetime-critical information.
Why condition monitoring
In predictive maintenance the goal is to prevent unexpected equipment failures by scheduling maintenance actions optimally. When successful, it is possible to reduce unplanned stops in equipment operation and save money through a higher utilization rate, extended equipment lifetime and lower personnel and spare part costs. Succeeding in this task requires a deep understanding of asset behaviour, statistical information about equipment usage and knowledge of the wear models, combined with measurements that reveal the equipment's current state of health. Earlier these measurements were carried out periodically by trained experts with special equipment, but modern IoT technologies now make it possible to gather real-time information about field devices continuously (i.e. condition monitoring). While this increases the availability of data, it creates another challenge: how to process massive amounts of data so that the right information is found at the right time. In the condition monitoring process the gathered data should optimally be processed so that the amount of data transferred uplink decreases while the understanding of the data increases.
This article describes how modern IoT technologies help in condition monitoring processes and how data aggregation solutions make it possible to share condition monitoring information between different vendors. This further improves operational efficiency by enabling real-time condition monitoring not only at the asset level but also at the plant or fleet level, where service operators must understand the behaviour and remaining lifetime of assets coming from different manufacturers.
WRM247+ data collector for edge analysis
The first link in the condition monitoring chain is the hardware and sensors. In order to measure the physical phenomena behind the wear of an asset, a set of sensors is required to sample and capture data from the monitored devices. This data must be buffered locally and pre-processed, and finally only the crucial information must be transferred to the server, where the physical phenomena can be identified from the signals. Depending on the application area, different types and models of sensors are required to capture just the relevant information, and depending on the physical phenomena, different kinds of analysis methods are needed. For these reasons, measurement systems have so far been custom-tailored to the target. This approach works, of course, but designing custom-tailored measurement systems is time-consuming and expensive. Our approach to overcoming these problems has been to implement IoT building blocks that adapt to a wide variety of purposes and can be taken into use easily and flexibly. Flexibility and user-friendliness are cornerstones of our system.
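The edge pre-processing step above – buffering raw samples locally and sending only the crucial information uplink – can be illustrated with a toy sketch. The indicators (RMS, peak, crest factor) are common vibration condition indicators, but they are chosen here for illustration; the post does not state which quantities the device actually computes.

```python
import json
import math

def summarize_window(samples):
    """Reduce a locally buffered vibration window to a few aggregate
    indicators, so only these values need to be sent uplink."""
    n = len(samples)
    rms = math.sqrt(sum(x * x for x in samples) / n)
    peak = max(abs(x) for x in samples)
    return {
        "rms": rms,
        "peak": peak,
        # Crest factor (peak/RMS) rises when the signal becomes
        # impulsive, e.g. with impacts from a damaged rolling bearing.
        "crest_factor": peak / rms,
    }

# One second of a synthetic 100 Hz vibration signal with a single
# simulated impact spike added at one sample.
window = [math.sin(2 * math.pi * 10 * t / 100) for t in range(100)]
window[37] += 5.0  # simulated impact

payload = json.dumps(summarize_window(window))
print(payload)  # three numbers go uplink instead of 100 raw samples
```

The data reduction is the point: a window of raw samples stays on the device, while the server receives only a small JSON payload per window.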
On the hardware side our IoT platform offers several approaches. The WRM247+ measurement and communication device is our hardware reference design; it allows connecting a wide variety of industrial sensors using either wired or wireless communication methods, and also provides local buffering and pre-processing of data as well as communication with the server. Examples of supported standard protocols are CAN, CANopen, Modbus, Modbus TCP, 1-Wire and digital/analog I/Os. This device is an excellent starting point for most common industrial measurement purposes.
In the MANTIS project Wapice has been investigating the interoperability of wireless and wired sensors. In Use Case 3.3, Conventional Energy Production, we will demonstrate the fusion of wireless Bluetooth Low Energy (BLE) technology and wired high-accuracy vibration measurements. To achieve this we have built support for connecting IEPE-standard vibration sensors to the WRM247+ device. The device supports any industrial IEPE-standard sensor, which makes it possible to select a suitable sensor for the application area. Additionally, we have built support for connecting a network of BLE sensors to the device. In this use case the purpose of the arrangement is to gather temperature information around the flue-gas circulation blower using the wireless BLE sensors and to perform vibration measurements on the rolling bearing. The temperature measurements reveal possible problems, e.g. in the lubrication of the bearing, and may allow actions to be taken before a catastrophic failure happens.
In case the WRM247+ device is not suitable for the purpose, it is possible to integrate custom devices into IoT-Ticket easily using the available REST API. For this purpose we provide full documentation and free developer libraries for several programming languages, including for example C/C++, Python, Qt, Java and C#. Other integration methods include OPC or OPC UA and the Libelium sensor platform, which supports e.g. wireless LoRa sensors. In addition, Wapice has long experience in designing machine-to-machine (M2M) solutions, including PCB layout, embedded software design and protocol implementation, so we also offer tailored Internet of Things hardware or embedded software that fully suits your needs.
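To give a feel for such a REST integration, here is a hedged sketch using only the Python standard library. The base URL, path and payload schema below are placeholders for illustration, not the documented IoT-Ticket API; the real endpoints and message format are described in the platform's developer documentation.

```python
import json
import urllib.request

# Placeholder endpoint: substitute the real API base URL and the
# device write path from the platform documentation.
BASE_URL = "https://example.com/api/v1"

def build_write_request(device_id, name, value, unit):
    """Build an HTTP POST request that writes one measurement value
    for a device to a REST API (authentication headers omitted)."""
    payload = json.dumps([{"name": name, "v": value, "unit": unit}]).encode()
    return urllib.request.Request(
        f"{BASE_URL}/devices/{device_id}/writedata",  # hypothetical path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_write_request("demo-device", "bearing_temperature", 41.7, "C")
print(req.full_url)
# Sending would be: urllib.request.urlopen(req)  (needs real credentials)
```

In practice one would use the vendor's developer library for the chosen language rather than hand-building requests, but the shape of the integration – a small JSON payload POSTed per measurement – is the same.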
IoT-Ticket portal for back-end tools
On the back-end side IoT-Ticket provides all the necessary tools for visualizing and analyzing data. Our tools are web based and require no installation: simply log in, create and explore!
The Dashboard allows users to interact securely with remote devices, check their status, view reports or get status updates on current operational performance. It can be utilized in various scenarios, e.g. vehicle tracking or real-time plant or machinery monitoring and control. Since many Dashboard Pages are available, the user can switch between different contexts and drill into information, starting from the enterprise level down to sites, assets and data nodes. The Dashboard also includes two powerful tools for content creation: the Interface Designer and the Dataflow Editor.
Using the Interface Designer the user can draw new elements or add images, gauges, charts, tables, Sankey diagrams, buttons and many other elements onto the Dashboard. These elements can then be easily connected to data by dragging and dropping Data Tags onto them.
The Dataflow Editor is an IEC 61131-3 inspired, web-based, graphical block programming editor that integrates seamlessly with the Interface Designer. A user can design a dataflow by connecting function blocks to implement complex logic operations, which can then be used to execute control actions or be routed to user interface elements for monitoring purposes.
In Use Case 3.3, Conventional Energy Production, Wapice – together with Finnish partners – demonstrates Cloud-to-Cloud integration in the MANTIS platform using the IoT-Ticket platform tools. In this use case LapinAMK and VTT have jointly set up a Microsoft Azure based MIMOSA data aggregation database. The plan is to share condition monitoring KPI information through the MIMOSA database, which allows sharing data through a REST API. Devices may push data either directly to MIMOSA or through local clouds.
IoT-Ticket allows communication with REST sources using the Interface Designer's graphical flow programming tools. Getting data from a REST source is done simply by creating a background server flow that contains a trigger and a REST block. The REST block is configured with the username and password for authenticating to the REST source, the source URL and the REST method that contains the XML/JSON payload. From the REST response the data value is parsed and output to data charts or forwarded for further processing. Additionally, virtual data tags allow forwarding the data into the IoT-Ticket system. By configuring the flow to run in server mode, it runs silently in the background all the time; the operation interval is set using the timer block, which fires the REST block at certain intervals. The example video below shows how Cloud-to-Cloud communication between MIMOSA and IoT-Ticket is established in Use Case 3.3. In the video, sinusoidal test data originates from the LapinAMK enterprise, and a tachometer RPM reading comes from the system under test via a WRM247+ device.
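The timer-plus-REST-block flow described above is roughly equivalent to the following loop body, paraphrased in Python. The REST source is stubbed out here, and the payload structure is invented for the example; the real schema depends on the MIMOSA installation.

```python
import json

def poll_rest_source(fetch, parse_path):
    """One firing of the timer: fetch the REST payload, extract the
    value at `parse_path`, and return it for forwarding to a virtual
    data tag (or onward to charts and further processing)."""
    payload = json.loads(fetch())
    value = payload
    for key in parse_path:
        value = value[key]
    return value

# Stand-in for the authenticated REST block: in the real flow this
# would be an HTTP request against the MIMOSA aggregation database.
def fake_mimosa_fetch():
    return json.dumps({"measurement": {"rpm": 1480, "unit": "1/min"}})

rpm = poll_rest_source(fake_mimosa_fetch, ["measurement", "rpm"])
print(rpm)  # → 1480, the value a virtual data tag would receive
```

In the actual platform this logic is assembled graphically from a timer block, a REST block and a parsing step rather than written as code, but the data path is the same.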
The reporting and analytics tools add to the platform's features. The report editor integrates seamlessly with the Dashboard and offers the user the possibility to create or modify content: the user can draw new elements or add images, gauges, charts, tables, Sankey diagrams, buttons and many other elements onto the report, and connect them to data by dragging and dropping Data Tags onto them. The analytics tool is also integrated into the Dashboard and supports you in understanding your data better.
Benefits of IoT in condition monitoring
Typically, condition monitoring data has been scattered across separate information systems, and it has been very hard or even impossible to create a single view of all relevant data or to correlate information held in different databases. MIMOSA is an information exchange standard that allows classifying and sharing condition monitoring data between enterprise systems. It answers the data sharing problem by allowing the aggregation of crucial information into a single location in a uniform and understandable format. When interfaced with modern REST based information sharing technologies that utilize, for example, JSON or XML based messaging, it is surprisingly easy to collect and share crucial information using a single aggregation database. Accompanied by modern web based industrial IoT tools, it is then easy to visualize data, create reports or perform further analysis using only the crucial information available.
In this blog posting I have highlighted some examples of how industrial IoT building blocks help you gather relevant condition monitoring information, create integrations between data sources and aggregate business-relevant information into a single location. Focusing on the crucial information allows you to understand your assets better and predict maintenance needs. This is essential when optimizing the value of your business!
MANTIS: Cyber Physical System based Proactive Collaborative Maintenance.
This project has received funding from the ECSEL Joint Undertaking under grant agreement No 662189. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and Spain, Finland, Denmark, Belgium, Netherlands, Portugal, Italy, Austria, United Kingdom, Hungary, Slovenia, Germany.