
Designing a predictive maintenance solution for IoT


What is predictive maintenance?

Predictive maintenance is a form of data analysis that uses IoT devices to detect patterns in machinery that may indicate developing problems. By assessing the stages of deterioration, predictive maintenance lets you model the life cycle of machinery and catch failures before they happen.

Predictive vs preventive maintenance

Predictive maintenance means predicting when a machine will break down and using that information to schedule maintenance more efficiently for your technicians. An example of predictive maintenance is using sensor or engine data to estimate when a machine failure is going to happen.

Preventive maintenance follows a regular, pre-determined cadence - e.g., monthly, annually, or based on a machine's runtime. An example of preventive maintenance is changing the oil in your car every ten thousand miles.

Why predictive maintenance is important

Predictive maintenance attempts to reduce the frequency of maintenance visits by accurately determining the current state of machinery using a variety of sensing and analysis techniques.

IoT for predictive maintenance

Predictive maintenance techniques rely on IoT devices to generate large amounts of data about machinery. AI and machine learning models then churn through this data to produce predictions about the machinery's state. IoT devices used for predictive maintenance include vibration sensors, microphones, thermal sensors, and infrared and ultrasonic sensors.

Machine Learning Based Unbalance Detection of a Rotating Shaft Using Vibration Data

This tutorial is adapted from the paper Machine Learning-Based Unbalance Detection of a Rotating Shaft Using Vibration Data by Mey, O.; Neudeck, W.; Schneider, A.; and Enge-Rosenblatt, O., presented at the 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2020).

We will attempt to detect unbalance on a rotating shaft based on vibration data. We will train and evaluate a fully-connected neural network on data that has undergone a Fast Fourier Transform (FFT) in Collimator.

How the data is collected

The setup of the simulation is as follows: a DC motor is controlled by a motor controller and drives a shaft fitted with an unbalance holder carrying masses of varying sizes. Three vibration sensors are placed in close proximity to the shaft. The model is shown below:

Summary of unbalance detection setup

Reading Measurement Data

The entire dataset we will work with is freely available via the Fraunhofer Fordatis data space.

In total, datasets for four different unbalance strengths/weights were recorded (1D/1E ... 4D/4E), along with one dataset recorded with the unbalance holder carrying no additional weight (i.e. without unbalance, 0D/0E). Here D denotes the development (training) set and E the evaluation set. The unbalance weights are shown in the table below:

\begin{array} {|c|c|c|}\hline \text{ID} & \text{Radius [mm]} & \text{Mass [g]} \\ \hline 0D/0E & - & - \\ \hline 1D/1E & 14 \pm 0.1 & 3.281 \pm 0.003 \\ \hline 2D/2E & 18.5 \pm 0.1 & 3.281 \pm 0.003 \\ \hline 3D/3E & 23 \pm 0.1 & 3.281 \pm 0.003 \\ \hline 4D/4E & 23 \pm 0.1 & 6.614 \pm 0.007 \\ \hline \end{array}
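For convenience, the table can also be captured in code. The mapping below is our own addition (the tutorial does not define it) and simply mirrors the table above:

# Unbalance configurations from the table above (our own convenience mapping;
# None means no unbalance weight was mounted)
unbalance_configs = {
    '0D/0E': None,
    '1D/1E': {'radius_mm': 14.0, 'mass_g': 3.281},
    '2D/2E': {'radius_mm': 18.5, 'mass_g': 3.281},
    '3D/3E': {'radius_mm': 23.0, 'mass_g': 3.281},
    '4D/4E': {'radius_mm': 23.0, 'mass_g': 6.614},
}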

We will begin by importing the libraries we will work with:

import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
import zipfile
from sklearn.preprocessing import RobustScaler

After importing our libraries, we define the path to the zip archive obtained from the Fraunhofer Fordatis data space, along with a directory for saving trained models:

url = '../data/fraunhofer_eas_dataset_for_unbalance_detection_v1.zip'  # path to the downloaded archive

use_reference_models = False  # set True to load pre-trained reference models instead of training

model_path = '../models'  # directory where trained models are saved


Next we will read each CSV file from the archive into its own variable for processing.

with zipfile.ZipFile(url, 'r') as f:
    with f.open('0D.csv', 'r') as c:
        data0D = pd.read_csv(c)
    with f.open('0E.csv', 'r') as c:
        data0E = pd.read_csv(c)
    with f.open('1D.csv', 'r') as c:
        data1D = pd.read_csv(c)
    with f.open('1E.csv', 'r') as c:
        data1E = pd.read_csv(c)
    with f.open('2D.csv', 'r') as c:
        data2D = pd.read_csv(c)
    with f.open('2E.csv', 'r') as c:
        data2E = pd.read_csv(c)
    with f.open('3D.csv', 'r') as c:
        data3D = pd.read_csv(c)
    with f.open('3E.csv', 'r') as c:
        data3E = pd.read_csv(c)
    with f.open('4D.csv', 'r') as c:
        data4D = pd.read_csv(c)
    with f.open('4E.csv', 'r') as c:
        data4E = pd.read_csv(c)
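The ten files can equivalently be read in a loop into a dictionary; this refactor is our own and not part of the original tutorial:

# Equivalent loop-based loading into a dictionary (our own refactor)
datasets = {}
with zipfile.ZipFile(url, 'r') as f:
    for name in ['0D', '0E', '1D', '1E', '2D', '2E', '3D', '3E', '4D', '4E']:
        with f.open(f'{name}.csv', 'r') as c:
            datasets[name] = pd.read_csv(c)
# e.g. datasets['0D'] holds the same frame as data0D above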

Next we apply labels and define the samples per second and the seconds per analysis. The window of data on which the neural network makes its predictions is the product of the samples per second and the seconds per analysis, which are 4096 and 1 respectively:

labels = {'no_unbalance':0, 'unbalance':1}
sensor = 'Vibration_1'
samples_per_second = 4096
seconds_per_analysis = 1.0
window = int(samples_per_second*seconds_per_analysis)

We create a function that splits the sensor data into windows and assigns a label to each one: unbalance (1) or no unbalance (0).

def get_features(data, label):
    # Number of complete windows that fit into the recording
    n = int(np.floor(len(data)/window))
    # Trim the recording so it divides evenly into windows
    data = data[:n*window]
    # One row per window, one label per window
    X = data.values.reshape((n, window))
    y = np.ones(n)*labels[label]
    return X, y

Next we run through each development data file and each evaluation data file to create two large arrays with the correct unbalance labels.

X0,y0 = get_features(data0D[sensor], "no_unbalance")
X1,y1 = get_features(data1D[sensor], "unbalance")
X2,y2 = get_features(data2D[sensor], "unbalance")
X3,y3 = get_features(data3D[sensor], "unbalance")
X4,y4 = get_features(data4D[sensor], "unbalance")
X=np.concatenate([X0, X1, X2, X3, X4])
y=np.concatenate([y0, y1, y2, y3, y4])

X0_val, y0_val = get_features(data0E[sensor], "no_unbalance")
X1_val, y1_val = get_features(data1E[sensor], "unbalance")
X2_val, y2_val = get_features(data2E[sensor], "unbalance")
X3_val, y3_val = get_features(data3E[sensor], "unbalance")
X4_val, y4_val = get_features(data4E[sensor], "unbalance")
X_val=np.concatenate([X0_val, X1_val, X2_val, X3_val, X4_val])
y_val=np.concatenate([y0_val, y1_val, y2_val, y3_val, y4_val])

Now the training dataset X contains 32226 samples with 4096 values each, along with the associated label array y (one label per sample). The dataset for validating the trained model, X_val, contains 8420 samples plus the labels y_val accordingly.

print(X.shape, y.shape, X_val.shape, y_val.shape)

(32226, 4096) (32226,) (8420, 4096) (8420,)

Train-Test-Split

Next we will perform a train-test split of the dataset using scikit-learn. The result is one set for training and one set for testing.

from sklearn.model_selection import train_test_split
train_test_ratio = 0.9
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 1-train_test_ratio, random_state = 0)

print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)

(29003, 4096) (29003,) (3223, 4096) (3223,)

Preprocess Data Using FFT

Next, we use NumPy's real-input fast Fourier transform (np.fft.rfft) to compute each window's discrete Fourier transform, keeping the magnitudes of the first window/2 = 2048 frequency bins. We then zero out the first bin of each spectrum (the DC component), since a window's mean value carries no information about unbalance.

X_fft = np.abs(np.fft.rfft(X, axis=1))[:,:int(window/2)]
X_train_fft = np.abs(np.fft.rfft(X_train, axis=1))[:,:int(window/2)]
X_test_fft = np.abs(np.fft.rfft(X_test, axis=1))[:,:int(window/2)]
X_val_fft = np.abs(np.fft.rfft(X_val, axis=1))[:,:int(window/2)]

X_fft[:,0]=0
X_train_fft[:,0]=0
X_test_fft[:,0]=0
X_val_fft[:,0]=0

print(X_train_fft.shape, X_test_fft.shape, X_val_fft.shape)

(29003, 2048) (3223, 2048) (8420, 2048)
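Before training, it is worth checking that the two classes are distinguishable in the frequency domain. The quick plot below is our own addition (not from the paper) and compares the mean magnitude spectrum of balanced and unbalanced training windows:

# Compare the mean magnitude spectrum of the two classes (our own sanity check)
freqs = np.fft.rfftfreq(window, d=1.0/samples_per_second)[:window//2]
plt.plot(freqs, X_train_fft[y_train == 0].mean(axis=0), label='no unbalance')
plt.plot(freqs, X_train_fft[y_train == 1].mean(axis=0), label='unbalance')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Mean FFT magnitude')
plt.legend()
plt.show()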

Scaling

scikit-learn's preprocessing module provides a RobustScaler class that standardizes a dataset using statistics that are robust to outliers. Here each frequency bin is centered on its median and scaled by the range between its 5th and 95th percentiles, which curbs the effect of outliers that would otherwise skew the neural network's training.

scaler = RobustScaler(quantile_range=(5,95)).fit(X_train_fft)

X_fft_sc = scaler.transform(X_fft)
X_train_fft_sc = scaler.transform(X_train_fft)
X_test_fft_sc = scaler.transform(X_test_fft)
X_val_fft_sc = scaler.transform(X_val_fft)
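As a quick check of our own (not in the tutorial), the scaled training features should now be centered near zero, since RobustScaler subtracts each feature's training-set median:

# Each frequency bin's training-set median should now be approximately zero
print(np.abs(np.median(X_train_fft_sc, axis=0)).max())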

Fully-Connected Neural Network (FCN)

Evaluation of model performance depending on the number of layers

The paper submitted to the 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2020) tested neural networks with zero to four hidden layers and concluded that the network with four hidden layers achieved the highest accuracy, at 98.1%.

Neural network accuracy rate vs number of hidden layers

We're now going to train and evaluate a fully connected neural network with four hidden layers in Collimator. We'll begin by defining our TensorFlow neural network, then use Keras' fit function to train it for 100 epochs and validate the model.

from tensorflow.keras.models import Sequential, load_model, Model
from tensorflow.keras.layers import BatchNormalization,LeakyReLU,Dense,Dropout
from tensorflow.keras.layers import Input,Conv1D,MaxPooling1D,Flatten,ReLU
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.regularizers import l1_l2

# Weight each class inversely to its frequency so that the imbalanced
# classes contribute equally to the loss
weight_for_0 = len(y)/(2*len(y[y==0]))
weight_for_1 = len(y)/(2*len(y[y==1]))
class_weight = {0: weight_for_0, 1: weight_for_1}
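# Quick inspection (our own addition, not in the tutorial): print the class
# counts and the resulting weights to see the imbalance being corrected
print('no_unbalance windows:', int((y == 0).sum()),
      '| unbalance windows:', int((y == 1).sum()))
print('class_weight:', class_weight)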

epochs = 100

X_in = Input(shape=(X_train_fft.shape[1],), name="cam_layer")
x = X_in
# the 4 hidden layers: each a Dense layer followed by a LeakyReLU activation
for j in range(4):
    x = Dense(units=1024, activation="linear")(x)
    x = LeakyReLU(alpha=0.05)(x)
# single sigmoid output for the binary unbalance classification
X_out = Dense(units=1, activation='sigmoid')(x)
model_i = Model(X_in, X_out)


best_model_filepath = f"{model_path}/fft_fcn_5_layers.h5"
checkpoint = ModelCheckpoint(best_model_filepath, monitor='val_loss', 
                             verbose=1, save_best_only=True, mode='min')

model_i.compile(optimizer = Adam(learning_rate=0.0005), loss = 'binary_crossentropy', 
                metrics = ['accuracy'])
model_i.summary()


Fully-connected neural network model parameters

Predictive maintenance system results

model_i.fit(X_train_fft_sc, y_train, epochs=epochs, batch_size=128,
            validation_data=(X_test_fft_sc, y_test), callbacks=[checkpoint],
            class_weight=class_weight)

Neural network results

At this point we can see that our model reaches a predictive accuracy of 99% on the training set. The next step is to run the model against our evaluation data and see how it fares.

from tensorflow.keras.models import load_model

# Reload the checkpoint with the lowest validation loss
best_model_filepath = f"{model_path}/fft_fcn_5_layers.h5"
model_i = load_model(best_model_filepath)
#train_acc_ges = model_i.evaluate(X_train_fft_sc, y_train)
#test_acc_ges = model_i.evaluate(X_test_fft_sc, y_test)
# Predict on the evaluation set and compute its accuracy
test_results = model_i(X_val_fft_sc, training=False)
val_acc_ges = model_i.evaluate(X_val_fft_sc, y_val)
results_to_pandas = pd.DataFrame(np.array(test_results))
test_to_pandas = pd.DataFrame(y_val)

Model accuracy against evaluation data set

Our model has an accuracy of 97% on the evaluation data. We can then compare the network's predictions with the actual labels and pinpoint the windows the network struggled with for future testing.

# Flatten the (n, 1) prediction tensor into a 1-D array of probabilities
predictions = np.array(test_results).transpose()[0]
print(pd.DataFrame({"Results": predictions,
                    "Tests": y_val}))

Comparing the neural network guess vs the actual unbalances
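To pinpoint those windows concretely, one can threshold the sigmoid outputs at 0.5 and list the mismatched indices. This short sketch is our own addition, not from the paper:

# Threshold the sigmoid outputs and list misclassified windows (our own sketch)
predicted_labels = (predictions > 0.5).astype(int)
misclassified = np.where(predicted_labels != y_val)[0]
print(f"{len(misclassified)} of {len(y_val)} evaluation windows misclassified")
print(misclassified[:20])  # first few indices for closer inspection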
Try it in Collimator