The MANTIS project aims to reach out to a large number of communities. This goal can be supported by multimedia material, and the MANTIS consortium decided, from the inception of the project, to produce videos explaining its results to different kinds of public: from researchers in academia, to engineers in industry, to the general public.
Different groups of partners have produced videos, and the target KPI of 3 videos was easily surpassed. The consortium then decided to embark on the effort of creating a video of the MANTIS project at large, and in this context a professional video company was contacted to do the technical work.
The script of the video was written by Michele Albano (ISEP), and reviewed by many partners of the MANTIS consortium. The music track in the video was written and played by Erkki Jantunen (VTT) and Urko Zurutuza (MGEP).
The company Ideias com Pernas took care of creating an animation to lay down the motivation behind the MANTIS project, of the voice track, and of putting together all the material into the final video:
Usage of the MANTIS Service Platform Architecture for servo-driven press machine maintenance
A forming press is a machine tool that changes the shape of a workpiece by the application of pressure. Throughout the MANTIS project, FAGOR ARRASATE’s servo-driven press machine has been analysed in order to set up strategies that will enable online predictive maintenance of the press machine.
The proposed solution advocates soft-sensor-based algorithms. The soft sensing algorithms provide information about the physical status of the components, as well as about the performance of the systems. These algorithms take advantage of existing or available internal signals of the systems. The objective is to estimate inaccessible states and parameters of the systems while using as few physical sensors as possible to acquire the necessary signals.
So far, a characterization of the system components has been carried out on a scaled test bench of a real press machine. A servomotor has been analysed in order to extract information about its performance during press machine work cycles, such as the applied current, voltages and generated torque. In addition, the applied soft sensor algorithm has proven suitable for estimating the desired magnitudes of the systems even when some of the system parameters are unknown.
At the same time, the mechanical part of the press machine has been analysed in order to elaborate an analytical model of the mechanical part of the system. The purpose of this development is to relate the torque generated by the servomotor with the force applied by the press ram.
This information will be used to detect effects that occur during metal forming processes, such as unbalanced forces and the cutting shock effect, allowing maintenance of the system to be carried out.
Press machine manufacturers are confronted with increasing technological and cost pressure: many customers demand faster and ever more precise presses. The more precisely force is applied in a press machine, the higher the quality of the manufactured part. This is why increasing press machine accuracy is one of the most important challenges for manufacturers of these assets. In addition, the market requires increasingly faster press machines that, at the same time, offer higher bandwidth to increase production output in existing systems.
Nowadays, the torque of the press gear shaft is measured indirectly from the force that is applied in the connecting rod. This measurement is quite precise, but it needs to be continuously recalibrated to maintain its accuracy. To solve this problem, the technological answer is to measure the torque directly by using wireless sensors placed on the press gear shaft.
As a solution, IKERLAN has designed and manufactured a prototype of a shaft-adapted wireless sensor node which comprises a transducer based on torque-oriented gauges, a signal conditioning circuit and signal processing software, the latter allowing local preprocessing and treatment of the collected data by means of intelligent functions.
The design process followed two main phases:
Phase 1: Testbed validation
Before starting the development of the wireless torque sensor, a preliminary validation was carried out on testbeds at IKERLAN. This was an initial requirement to ensure the proper functioning of the gauges, generic electronics and wireless link under press-like working conditions.
With regard to wireless communications, two main challenges were tested: (i) signal attenuation due to the rotation of the emitter around the shaft and (ii) multipath fading due to RF signal reflections in the metallic (steel) elements of the head of the press in which the torque sensor is to be installed. Tests were successful, showing that, depending on the angular position of the shaft, and therefore on the relative position of the transmission and reception antennas, the amount of received power varies periodically.
A similar test was performed on the Try-Out press machine from FAGOR ARRASATE. In this case, both the emitter and the reception antenna were placed in a realistic position within the head of the press machine, as can be seen in the next figure.
Once the top cover was closed, creating a complete metallic case, it was observed that the received signal was not as clean as in the previous measurements, due to multipath reflections. The statistical features obtained from these signals were used in the selection of the most suitable wireless communication technology for the torque sensor.
Phase 2: Design and development
Once the concept and the elements of the device (gauges, conditioning and processing, radio) were validated in a rotational environment, system design and development started. A prototype of the wireless sensor node was designed and developed. It consists of a single PCB with the necessary interfaces to attach torque gauges, along with the conditioning, processing and wireless communication electronics. The whole system is powered by a rechargeable lithium ion polymer battery and is encapsulated and protected by a plastic cover shaped to the press’ secondary driving shaft, which is prepared to avoid oil leakage.
Once the design and fabrication of the wireless torque sensor was finished, the sensor was installed in the Try-Out press machine from FAGOR ARRASATE.
The first tests of the overall performance of the sensor were successful: signals with the torque measurements were sent to an external laptop, where they could be visualised. Later, the complete validation process was carried out. This process aimed to test the accuracy of the sensor’s measurements across several torque and speed values, as well as the robustness of the wireless communication protocol employed.
Several tests were carried out combining different values of the nominal torque and speed of the press, as well as several configurations of the sensing electronics. These results were compared with an estimation of the torque at the drive shaft obtained from an overload pressure evolution analysis. In addition, some measurements of the performance of the wireless communication were also taken. As an example, Figure 6 shows the results of the test in which the maximum torque (87%) and the maximum speed (100%) were configured at the press machine.
The measured torque values at almost every stroke are close to 60 kN·m, which corresponds to the estimated torque values. Moreover, the clutch brake engage and disengage events are still captured.
In general terms, the obtained results are considered valid, taking into account that they are compared with estimated values and not with a measurement obtained by a commercial system. However, looking at the measured torque values, some data may be missing on either the positive or the negative peaks, as the same amplitude should be acquired for each stroke. With regard to wireless communications, the expected performance in terms of data throughput and network availability has in general been achieved. However, the loss of some data packets has been detected, which should be corrected in future versions.
As an important upgrade of the system, it is expected that the inclusion of antenna diversity inside the shell of the press machine will improve the communication between emitter and receiver. This new configuration should decrease the number of packets lost.
Another point of improvement is the detection of low depths of penetration. To achieve this, new tests will be performed modifying the gain parameter of the wireless sensor node and the obtained results will be analyzed.
Last but not least, the energy management of the system is a key feature if the sensor is to be left permanently attached to the press machine’s drive shaft. With this in mind, a more energy-efficient redesign will be carried out, together with the development of an energy harvesting system to power the wireless sensor node.
In the frame of the MANTIS project, the three Belgian partners (Sirris, Ilias Solutions and 3E) focused their work on exploiting intelligent data-driven technologies for failure detection and root-cause analysis. In this video we show the work done and the results obtained within the MANTIS project by the Belgian consortium:
The virtual reality and augmented reality market is developing fast, both in Finland and on a global level. These technologies are emerging markets, especially on the consumer side, and are likely to affect maintenance related work in one form or another. They also have a lot of innovation potential. Virtual reality and augmented reality technologies have been used in the advanced HMI approaches for the Finnish conventional energy production use case. The Operation and Maintenance team of the Lapland University of Applied Sciences (LUAS) has made a technical implementation as part of that use case. The MANTIS project and the VR/AR demo will be on display at two different industrial business events in Finland.
Industrial events in Kemi and Oulu during spring 2018
AR and VR demonstrators related to the Fortum use case will be presented at the “Rikasta Pohjoista 2018” (Rich North 2018) and “Northern Industry 2018” events for industry professionals. The “Rikasta Pohjoista 2018” seminar will be held in Kemi on the 18th and 19th of April 2018. “Rikasta Pohjoista” is an Operation & Maintenance and mining industry event whose theme changes every year; the theme of 2018 is “Reliability & International Mining Industry and Recycling Economy”. The event will bring together over 100 industrial professionals of Finnish companies from the mining, steel, forest, energy and recycling industries. The two-day event has been arranged since 2015 at the Lapland University of Applied Sciences; before that, it was a one-day event centered only on maintenance.
“Northern Industry 2018” is the largest and only industrial trade fair in the northern part of Finland. It will be held in Oulu on the 23rd and 24th of May 2018. The event gathers 5000 professionals from the mining, steel and forest industries, as well as from the energy sector, the chemical industry and numerous service providers and equipment manufacturers. The event organizer is Expomark Oy.
Virtual Reality, Augmented Reality as Human Machine Interface
Industrial maintenance related usage of VR and AR can be roughly divided between factory-floor and back-office use: AR is more applicable to factory-floor and field maintenance tasks, guidance and maintenance monitoring, while VR is inherently more suited to back-office activities such as training and planning, as it is less practical as a mobile device on the factory floor. Both AR and VR solutions could be utilized as part of collaborative decision-making. Experts around the world could, for example, communicate with each other using avatars in a virtual space, while AR could be used locally to observe machinery status. This could allow for new business opportunities in maintenance-related support and collaboration.
LUAS first took an AR approach to the use case, as AR is more suitable for maintenance monitoring in the field and on the factory floor. The AR approach was implemented on the Google Tango platform, which consisted of a comprehensive Unity-compatible AR SDK and a special hardware platform comprising an IR dot matrix projector and a special camera capable of measuring the time-of-flight of the individual dots projected onto a shape. The combination of the SDK and the hardware platform was used to mitigate the drift inherent in any solely IMU (Inertial Measurement Unit) based positioning solution.
The AR application was named AHMI (Advanced HMI). It enables users to create a virtual representation of the flue gas recirculation blower at Fortum’s Järvenpää power plant and to retrieve real-world measurement data onto the measurement points attached to the virtual 3D model. It also supports adding virtual measurement points to real-world objects using real-world measurement data retrieved from the MIMOSA database. The measurement data comes from Nome’s sensors and is stored in the common MIMOSA database, from which it is retrieved through a REST interface developed by LUAS. Figure 1 presents the 3D model of the flue gas recirculation blower placed on a meeting room table.
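The retrieval step can be sketched as follows. Note that the base URL, the endpoint path, the field names and the `latest_value` helper are purely hypothetical illustrations of the pattern; the actual REST interface developed by LUAS is not described in this article.

```python
import json

# Hypothetical base URL -- the real address of the LUAS REST interface
# in front of the MIMOSA database is not public.
BASE_URL = "https://example.org/mimosa-api"

def measurement_url(asset_id, limit=100):
    """Build an (assumed) query URL for one asset's latest samples."""
    return f"{BASE_URL}/assets/{asset_id}/measurements?limit={limit}"

def latest_value(payload, point):
    """Pick the most recent sample for a given measurement point out of
    a JSON payload assumed to look like [{"point", "value", "time"}, ...]."""
    samples = [s for s in json.loads(payload) if s["point"] == point]
    return max(samples, key=lambda s: s["time"])["value"] if samples else None
```

An HMI client would fetch `measurement_url(...)` over HTTP and feed the response body to `latest_value` to refresh the corresponding measurement point on the 3D model.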
The VR application was built solely on the HTC Vive platform; however, it could be ported to other VR hardware platforms such as Oculus, possibly even the Microsoft mixed reality platform. HTC Vive controllers were used at first, but they were replaced with the Leap Motion controller, a structured-light-based IR projection camera system intended for recognizing hand positions and gestures. This was incorporated into the VR demo system to remove the hindrance of the controllers and allow for a more intuitive approach, using the user’s bare hands as a control tool. The VR version creates a virtual world that consists of the same elements used in the AR approach, with added VR-only features. The VR application could be used to monitor the measurement data and as a training tool for maintenance professionals, for example to guide machine disassembly.
Figure 2 shows the windows displaying real-world vibration data in the VR application. In the virtual world, the users can open and reposition these windows to their own preferences. They also have a snap functionality that allows for neat alignment of all windows. Users can manipulate these windows naturally using their own hands rendered in VR with the Leap Motion.
The VR/AR world offers a new kind of experience for operation and maintenance work and for the people who work on the factory floor or in the back office. We at LUAS believe we have a good opportunity to introduce this Fortum Järvenpää use case demo and VR/AR technology to industry professionals at the events in Kemi and Oulu. It also gives a fantastic opportunity to disseminate what we have learned during the MANTIS project to a wider audience.
The Finnish use-case under the MANTIS project concentrates on proactive maintenance solutions in the field of conventional energy production. The industry is moving towards smaller distributed plants with less on-site staff, and thus the ability to deploy conventional CBM strategies has declined. However, availability is still a major factor in power generation efficiency and plant feasibility. Therefore, new kinds of energy production asset maintenance solutions, applicable also to less critical components, are required.
Five industrial and academic partners, namely Fortum, Lapland University of Applied Sciences (LUAS), Nome, VTT and Wapice, form the Finnish consortium in the MANTIS project. The Finnish use-case of conventional energy production is centered on a flue gas blower in Fortum’s Järvenpää power plant. Power plants have a large array of rotating machinery, whose reliability greatly affects the overall reliability of the plant. As such, the blower offers a valid testing environment for the collaborative maintenance solutions developed by the Finnish partners. The blower has been instrumented with vibration sensors, virtual sensors and local data collectors provided by Nome, Wapice and VTT. The measurement data is stored in the MANTIS database, based on the MIMOSA data model, via a REST interface developed by LUAS. The collected data can be distributed to individual systems across organizational boundaries for analysis purposes. The partners of the conventional energy production use-case have integrated their own analytic tools, such as Fortum’s TOPi, Nome’s NMAS and Wapice’s IoT-Ticket, into the MANTIS database, as illustrated in Figure 1, and tested the system architecture successfully in practice.
The MANTIS project has offered a great opportunity for the conventional energy production use-case partners to develop their own HMIs that can be integrated into different fields of proactive maintenance. The development work continues in the third and last phase of the MANTIS project, as some advanced visualization approaches, including virtual reality and augmented reality applications, are piloted and integrated into the HMIs. The piloted cloud architecture from Fortum’s Järvenpää power plant will also be tested on a larger scale in another entire power plant. The data collection will be extended to cover a wider range of equipment and process variables to enable plant-wide monitoring of assets and proactive maintenance strategies. In addition, the partners are developing their analytic tools further to provide solutions capable of the diagnostics and prognostics required in advanced maintenance.
As we do every 4 months, we had a new consortium meeting in January. This time we met in the beautiful city of Ghent, and were fantastically hosted by our partner SIRRIS.
We are approaching the end of the project, and thus decided not to hold parallel sessions any more, so that everyone would be fully aware of the activities of all Work Packages. Also, the Open Sessions, where we always showcase our latest developments in an interactive way, featured no posters but plenty of live demos.
Of course, we continue working hard until the end of April!
Next, and last, meeting in Budapest, hosted by BMU & AITIA!
When analysing sensor data, you are typically confronted with different challenges relating to data quality. Here, we show you how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.
Nowadays, especially with the advent of the Internet of Things (IoT), large quantities of sensor data are collected. Small sensors can be easily installed, on multipurpose industrial vehicles for instance, in order to measure a vast range of parameters. The collected data can serve many purposes, e.g. to predict system maintenance. However, when analysing it, you are typically confronted with different challenges relating to data quality, e.g. unrealistic or missing values, outliers, correlations and other typical and atypical obstacles. The aim of this article is to show how these challenges can be dealt with and how we derive some initial insights from cleaned data via exploration techniques such as clustering.
Within the MANTIS project, Sirris is developing a general methodology that can be used to explore sensor data from a fleet of industrial assets. The main goal of the methodology is to profile asset usages, i.e. define separate groups of usages that share common characteristics. This can help experts to identify potential problems, which are not visually observable, when the resulting profiles are compared with the expected behaviour of the assets and when anomalies are detected.
In this article, we will describe the methodology of asset usage profiling for proactive maintenance prediction. The data used in this article is confidential and anonymised; we therefore cannot describe it in detail. It mainly consists of duration and resource consumption as well as a range of parameters measured via different sensors. For our analysis, we used Jupyter Notebook with appropriate libraries such as pandas, scipy and scikit-learn.
Data can be polluted: as it is collected from different sources, it can contain duplicates, wrong values, empty entries and outliers, which should all be considered carefully. Therefore, the natural first step is to conduct an initial exploration of the data and to prepare a single reference dataset for advanced analysis: first clean the data by means of visual and statistical methods, then select the right attributes you wish to work with further.
In our example dataset, we find negative or zero-resource consumption, a situation that is obviously impossible, as shown in Figure 1. In our case, since there are few outliers of this type, we simply remove them from the dataset.
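This removal step can be sketched with pandas as follows; the column names and values are made up for illustration, since the real dataset is confidential.

```python
import pandas as pd

# Toy usage records; "duration_s" and "consumption_l" are assumed
# names, standing in for the confidential duration/consumption data.
df = pd.DataFrame({
    "duration_s":    [120, 300, 45, 600, 90],
    "consumption_l": [3.5, 8.0, 0.0, 15.2, -1.3],
})

# Zero or negative resource consumption is physically impossible,
# and such rows are rare, so we simply drop them.
clean = df[df["consumption_l"] > 0].reset_index(drop=True)
print(len(clean))  # 3 valid usages remain
```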
Figure 1 Zero or negative consumption
Another possible example is that of an erroneous date in the data. For example, dates may be too old compared to the rest of your dataset; future dates can even exist. Your decision to maintain, fix or remove wrong instances can depend on many factors, such as how big your dataset is, whether an erroneous date is very important at the current stage, etc. In our case, we maintain these instances since, at this moment, the date is not important for analysis and the percentage of this subset is very low.
Outliers are extreme values that deviate significantly from the other observations and also need to be dealt with carefully. They can be detected visually or using statistical means. Sometimes we can simply remove them; sometimes we want to analyse them thoroughly. Visualising the data directly reveals some potential outliers; refer to the point in the upper right-hand corner in Figure 2. In our case, such high values for duration and consumption are impossible, as shown in Figure 3. Since it is the first record for this type of asset, it may have been entered manually for test purposes; we consequently choose to remove it.
Figure 2 Visual check for outliers
Figure 3 Impossible data
In Figure 4, we can see a positive linear correlation between consumption and duration, which is to be expected, although we may still find some outliers using the 3-sigma rule. This rule states that, for the normal distribution, approximately 99.7 percent of observations lie within 3 standard deviations of the mean. Moreover, by Chebyshev’s inequality, even for non-normally distributed data at least 88.9 percent of cases fall within 3 standard deviations. Thus, we consider observations beyond 3 sigmas as outliers.
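A minimal sketch of the 3-sigma rule on synthetic data (the real consumption values are confidential, so we generate a normal sample and inject a few artificial extremes):

```python
import numpy as np

rng = np.random.default_rng(0)
consumption = rng.normal(loc=10.0, scale=2.0, size=1000)
consumption[::250] += 25.0  # inject four extreme values as artificial outliers

# Normalise to z-scores first, then flag anything beyond 3 standard deviations.
z = (consumption - consumption.mean()) / consumption.std()
outliers = np.abs(z) > 3
print(int(outliers.sum()))  # the injected extremes are flagged
```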
Figure 4 Data after cleaning
In Figure 5, we see that our data is approximately normal, centred around 0, with most values lying between -2 and 2. This means that the 3-sigma rule will give us accurate results. Note that you must normalise your data before applying this rule.
Figure 5 Distribution of normalised consumption/s
Results are shown in Figure 6. The reason for such a significant deviation from the average in consumption and duration of certain usages is to be discussed with a domain expert. One instance with very low consumption for a long duration raises particular questions (Figure 7).
Figure 6 3-sigma rule applied to normalised data
Figure 7 Very low consumption for its duration
Advanced data exploration
As previously stated, we are looking to profile asset usages in order to identify abnormal behaviour and therefore, along with duration and resource consumption, we also need to investigate the operational sensor data for each asset. This requires us to define groups of usages that share common characteristics; however, before doing so, we need to select a representative subset of data with the right sensors.
From the preliminary analysis, we observed that the number of sensors can differ between the assets and even between usages of the same asset. Therefore, for later modelling we need to select only usages that always contain the same sensors, i.e. training a model requires vectors of the same length. To achieve this, we can use the following approach, as illustrated in Figure 8.
Figure 8 Selecting sensors
Each asset has a number of sensors that can differ from usage to usage, i.e. some modules can be removed from or installed on the asset. Thus, we first check the presence of each sensor across the whole dataset. Then, we select all usages with sensors that are present above a certain percentage, e.g. 95 percent, of the whole dataset. Let’s assume our dataset contains 17 sensors that are present in 95 percent of all usages. We select these sensors and discard those with lower presence percentages. This way, we create a vector of sensors of length 17. Since we decided to include sensors if they are 95 percent present, a limited number of selected usages may still lack some of the selected sensors, i.e. we introduce gaps, which are marked in yellow in the figure. To fix these gaps, you can either discard these usages or impute values for the missing sensors. Imputing can be complex, as you need to know what these sensors mean and how they are configured. In our case, these details are anonymised, so these usages are discarded. You may need to lower your presence percentage criterion in order to keep a sufficiently representative dataset for further analysis.
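The presence-based selection can be sketched like this; the sensor names and the 80 percent threshold are illustrative (the article uses 95 percent over 17 sensors):

```python
import numpy as np
import pandas as pd

# Toy usage-by-sensor matrix: NaN means the sensor was absent for that usage.
usages = pd.DataFrame({
    "s1": [1.0, 2.0, 1.5, 2.2, 1.8],
    "s2": [0.3, np.nan, 0.4, 0.5, 0.2],
    "s3": [np.nan, np.nan, np.nan, 7.0, np.nan],  # rarely present
})

threshold = 0.8                              # fraction of usages a sensor must appear in
presence = usages.notna().mean()             # presence ratio per sensor
kept = presence[presence >= threshold].index # sensors to keep
subset = usages[kept].dropna()               # discard usages that still have gaps
print(list(kept), len(subset))
```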
After the optimal subset is selected, we check the correlation of the remaining sensors. We do this because we want to remove redundant information and to simplify and speed up our calculations. Plotting a heatmap is a good way of visualising correlation. We do this for the remaining sensors as shown in Figure 9.
Figure 9 Sensor correlation heatmap
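The correlation check can be sketched on synthetic sensor columns as follows; the heatmap itself can be drawn from the same matrix with e.g. `seaborn.heatmap(corr)`, and the greedy filter at the end is one simple (assumed, not the article's exact) way to keep only weakly correlated sensors:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
a = rng.normal(size=n)
b = 0.98 * a + rng.normal(scale=0.05, size=n)  # near-duplicate of sensor a
c = rng.normal(size=n)                         # independent sensor

df = pd.DataFrame({"a": a, "b": b, "c": c})
corr = df.corr().abs()  # this matrix is what the heatmap visualises

# Greedy filter: keep a sensor only if it is not highly correlated
# (here, |r| > 0.9) with any sensor we already kept.
keep = []
for col in corr.columns:
    if all(corr.loc[col, k] <= 0.9 for k in keep):
        keep.append(col)
print(keep)  # the redundant sensor b is dropped
```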
In our case, from the 17 sensors we select only 7 uncorrelated ones and plot a scatter matrix, a second visualisation technique which allows us to view more details of the data. Refer to Figure 10.
Figure 10 Scatterplot matrix of uncorrelated sensors
Based on the selected sensors, we now try to characterise different usages for each asset, i.e. we can group usages across the assets based on their sensor values and, in this way, derive a profile for each group. To do this, we first apply hierarchical clustering to group the usages and plot the resulting dendrogram. Hierarchical clustering helps to identify the inner structure of the data and the dendrogram is a binary tree representation of the clustering result. Refer to Figure 11.
Figure 11 Dendrogram
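With scipy, the clustering and the cut of the tree can be sketched as follows on synthetic 7-dimensional data with two obvious groups (the article's real data was cut into 5 clusters):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Two well-separated blobs in a 7-dimensional "sensor space".
X = np.vstack([rng.normal(0.0, 0.5, size=(30, 7)),
               rng.normal(5.0, 0.5, size=(30, 7))])

# Ward linkage builds the hierarchy; scipy.cluster.hierarchy.dendrogram(Z)
# would draw the binary tree shown in Figure 11.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(sorted(set(labels)))  # [1, 2]
```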
On this graph, below distance 2 we see smaller clusters that group ever closer to each other. Hence, we decide to split the data into 5 different clusters. You can also use silhouette analysis to select the best number of clusters.
In order to interpret the clustering, we also want to visualise it. However, 7 sensors mean 7 dimensions, and since we cannot plot in multidimensional space, we apply Principal Component Analysis (PCA) to reduce the number of dimensions to 2. This allows us to visualise the clustering results, as shown in Figure 12. Good clustering means that the clusters are more or less well separated, i.e. similar colours are close to one another and not mixed too much with other colours, and this is indeed what we see in the figure.
Figure 12 PCA plot
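The dimensionality reduction step can be sketched with scikit-learn on synthetic 7-dimensional data with two groups:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Synthetic stand-in for the 7 selected sensors, with two usage groups.
X = np.vstack([rng.normal(0.0, 0.5, size=(30, 7)),
               rng.normal(5.0, 0.5, size=(30, 7))])

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)  # project 7 dimensions down to 2 for plotting
print(X2.shape)  # (60, 2)
```

Each row of `X2` can then be scattered with matplotlib and coloured by its cluster label, which is how a plot like Figure 12 is produced.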
After the clustering is complete, we can characterise the usages. This can be done using different strategies. The simplest method consists in taking the mean of the sensor values for each cluster (i.e. we calculate a centroid) to define a representative usage.
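Given cluster labels, computing the representative usages is a one-liner with pandas (toy values and assumed sensor names):

```python
import pandas as pd

# Toy cluster assignments for a handful of usages; "s1"/"s2" are
# placeholder sensor names, not the project's actual sensors.
df = pd.DataFrame({
    "cluster": [1, 1, 2, 2, 2],
    "s1": [1.0, 1.2, 4.0, 4.2, 3.8],
    "s2": [0.5, 0.7, 2.0, 2.2, 1.8],
})

# Representative usage per cluster = the centroid (mean sensor values).
centroids = df.groupby("cluster").mean()
print(centroids)
```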
The last step involves validating the clusters. We can cross-check clustering with the consumption/duration of usages. For instance, we may expect all outliers to fall within one specific cluster, or expect some other more or less obvious patterns, hence rendering our clusters meaningful. In Figure 13 below, we can observe that the 5 clusters, i.e. 5 types of usages, correspond, to an extent but not entirely, to consumption/duration behaviour. We can see purple spots at the bottom and green spots at the top.
Figure 13 Relationship between clusters and consumption/duration
At this stage, some interesting outliers were detected in the consumption/duration relationships, which can be cross-checked against the purposes the assets were used for. We have found clusters that represent typical usages according to the data. Result validation can be improved by integrating additional data, such as maintenance data, into the analysis. Furthermore, the results can be validated and confirmed by the domain experts from Ilias Solutions, the industrial partner we are supporting in their data exploitation.
MANTIS: Cyber Physical System based Proactive Collaborative Maintenance.
This project has received funding from the ECSEL Joint Undertaking under grant agreement No 662189. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and Spain, Finland, Denmark, Belgium, Netherlands, Portugal, Italy, Austria, United Kingdom, Hungary, Slovenia, Germany.