Prescriptive Maintenance

1. Product Description

1.1. Solution Overview

AIDEAS Prescriptive Maintenance (AI-PM) is a toolkit for estimating the remaining useful life (RUL) of a machine as a whole, or of some of its components, while it operates under working conditions in the factory where it is used; for assessing the risk of failure of a piece of equipment or a component; and for identifying maintenance requirements. With these two approaches, RUL calculation and Risk Assessment, the user will be able to, for example:

  • Increase machine useful life by detecting failures early and allowing maintenance actions to be planned.

  • Reduce downtime in machines or equipment by predicting failures before they occur.

  • Determine precisely when planned maintenance actions are truly needed.

1.2. Features

The AIDEAS Prescriptive Maintenance solution offers the following features:

  • Defining the current machine configuration: the different components of the machine and their associated process variables, which helps the user contextualize the results obtained.

  • Importing data from external databases (e.g. MongoDB) and from data sources such as CSV or Excel files.

  • Validating and pre-processing data to ensure that the input data fed to the model is in the correct format.

  • Training models with different algorithms and different data sources, and saving them for later use.

  • Obtaining predictions and displaying the results in a user-friendly way. There are two operating modes: on-demand (offline) mode and cyclic (online) mode.

  • Determining the Remaining Useful Life (RUL) or the Risk Assessment of a machine or of any of its components.

1.3. Prerequisites

• Technical Specifications

The backend of AI-PM is developed in Python, using Flask as the framework for the API server. The backend provides the API endpoints through which the frontend communicates: sending requests and obtaining results. The frontend of the solution is developed in React.
For deployment, Docker is used, since it is the most widely adopted containerization solution. Docker also makes it easy to deploy the packaged application into the runtime environment and is widely supported by deployment tools and technologies.
For internal storage, a MinIO server is used; MinIO is a high-performance object storage system. The solution's outputs are displayed in the UI and also sent to the AIMP.

• Remaining Useful Life

The Remaining Useful Life (RUL) is a key indicator in predictive and prescriptive maintenance, used to estimate how much time a piece of equipment or a component has left before it fails. Different data analysis methods and predictive models are used for this purpose, the most common being Decision Trees, Random Forest, Support Vector Machines (SVM), Artificial Neural Networks (ANN), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM) networks. The latter is the one initially implemented in the tool. The LSTM is a type of recurrent neural network that has been used successfully in time-series tasks thanks to its ability to retain and manage long-term information. This matters when determining the RUL, which relies on large amounts of data generated over time by sensors monitoring variables such as temperature, vibration or pressure, among others. This data is sequential in nature and often has long-term dependencies. LSTM networks are able to remember and use relevant information from long sequences, which makes them particularly useful for predicting when equipment might fail based on its operating history.
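
For illustration, the following is a minimal sketch of an LSTM-based RUL regressor in PyTorch (one of the listed libraries). The layer sizes, sequence length and sensor count are assumptions made for the example only and do not describe the tool's internal implementation.

    import torch
    import torch.nn as nn

    class RULEstimator(nn.Module):
        """Toy LSTM regressor mapping a window of sensor readings to a RUL value."""
        def __init__(self, n_features: int, hidden_size: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                                num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)    # single regression output: RUL

        def forward(self, x):                        # x: (batch, seq_len, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])          # prediction from the last time step

    # 32 windows of 50 time steps, each with 4 sensor variables
    # (e.g. temperature, vibration, pressure, speed)
    model = RULEstimator(n_features=4)
    dummy_batch = torch.randn(32, 50, 4)
    print(model(dummy_batch).shape)                  # torch.Size([32, 1])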

• Risk Assessment

The risk of failure in a given operational context is assessed on the basis of historical data, which includes information about previous failures of the equipment (when they occurred) as well as related information about the conditions under which the equipment was operating (temperature, vibration, pressure, speed, etc.). By analysing the time between failures and the operating-context parameters under which the component or equipment has been operating, the solution assesses the risk of failure of a component as a function of the cumulative operating time (or another significant factor, variable of interest or driver) and the impact of the operating-context parameters. For this purpose, a survival analysis is performed using the Cox proportional hazards model. The Cox model is a powerful and widely used tool in statistics and data analysis for investigating and modelling the time to an event of interest. The estimation performed by the Cox model is based on the premise that the instantaneous risk of the event occurring (the failure rate) for an individual is proportional with respect to the covariates (in this case, variables related to the operational context); that is, the covariates affect the risk in a constant manner over time. Thus, based on the equipment characteristics and a range of values associated with the operational-context parameters, the Cox model computes a customised survival function, which shows the risk of failure, or the probability of survival, for each value of the driver or variable of interest.
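
As a conceptual illustration only, the snippet below fits a Cox proportional hazards model with the lifelines library (not one of the solution's listed dependencies) on synthetic data; all column names and values are hypothetical.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter   # illustration only; not a listed dependency

    # Synthetic history: operating hours until failure (or censoring) plus two
    # operational-context covariates. All names and values are hypothetical.
    rng = np.random.default_rng(0)
    n = 200
    temperature = rng.normal(70, 8, n)
    vibration = rng.normal(0.3, 0.1, n)
    operating_hours = rng.exponential(1000.0 / (1.0 + 0.02 * (temperature - 70) + vibration))
    failure = rng.integers(0, 2, n)      # 1 = failure observed, 0 = censored

    df = pd.DataFrame({
        "operating_hours": operating_hours,   # duration / driver variable
        "failure": failure,                   # event indicator
        "temperature": temperature,           # covariates (operational context)
        "vibration": vibration,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="operating_hours", event_col="failure")
    cph.print_summary()                       # hazard ratios per covariate

    # Personalised survival curve for one operational context
    context = pd.DataFrame({"temperature": [78.0], "vibration": [0.45]})
    print(cph.predict_survival_function(context).head())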

• Technical Development

This AIDEAS Solution has the following development requirements:

  • Development Languages: Python and JavaScript.

  • Libraries: NumPy, pandas, scikit-learn, PyTorch, Flask, PyYAML, SciPy, pickle.

  • Container and Orchestration: Docker, Kubernetes.

  • User Interface: React, PrimeReact, Redux.

  • Application Interfaces: REST API.

  • Database engine: MongoDB, MinIO.

• Hardware Requirements

AI-PM can run on any platform that supports Docker containers.

• Software Requirements

• External Dependencies

  • MongoDB (optional, for external data storage)

  • MinIO (for object storage)

2. Installation

2.1. Environment Preparation

Ensure that all dependencies, including Docker, Python, and npm, are installed. Clone the repository from the official GitLab project and configure the backend and frontend environments as needed.

2.2. Step-by-Step Installation Process

  • Local Installation: Requires configuring backend and frontend, installing dependencies, and launching services manually.

  • Docker Installation: Uses a docker-compose.yml file to deploy the application.

  • Kubernetes Installation: Pending implementation.

3. Initial Configuration

3.1. First Steps

• Login

Users must log in using GitLab authentication before accessing secured application features.

Login Screen

• Interface Navigation

The application opens on its home screen. The tab navigation widget is placed on the left, and the available tabs are:

HOME

  • Dashboard → Tab in which an introduction to AI-PM is displayed and from which the other tabs can also be accessed.

    Home Screen

  • Help → Tab with guidelines.

    Help Screen

COMMON

  • Machine Configuration → Tab in which the machine on which the application runs is defined.

    Machine Configuration Screen

  • Data → Tab in which data is imported and visualized in table and graph formats. These options are supported:

    • Data Files RUL calculation: .csv, .xls or .xlsx.

      Data Files Screen 1

    • Data Files Risk Assessment: .csv, .xls or .xlsx.

      Data Files Screen 2

    • Establishing a connection with a MongoDB database and accessing its collections.

      MongoDB Screen

AI-PM RUL Calculation

  • Training → Tab in which models can be trained.

    PM RUL Model Training Screen

  • RUL Calculation → In this tab RUL calculation is performed and its results are visualized.

    PM RUL Model Evaluation Screen

AI-PM Risk Assessment

  • Training → Tab in which models can be trained.

    PM RA Model Training Screen

  • Risk Assessment → In this tab Risk Assessment is performed and its results are visualized.

    PM RA Model Evaluation Screen

The tabs “Machine Configuration”, “Data”, “RUL Calculation” and “Risk Assessment” cannot be accessed without logging in. Log in by clicking the user button in the top-right corner; a GitLab account is needed.


Machine Configuration

In the Machine Configuration screen the following actions can be performed:

Create a new Machine Configuration from scratch

  1. Add Components, Variables (name and ID), and their relations using the widgets under the “Machine Configuration” accordion.
    Create Machine Config File 1

  2. Once everything is defined, click on “Create” to see the hierarchy tree table under “Machine Configuration Overview” accordion tab. Create Machine Config File 2

  3. If needed, modify min, max, unit or description columns of the existing variables in the hierarchy tree table. Create Machine Config File 3

  4. Click on “Export” to save it in MinIO. Create Machine Config File 4

Import an existing Machine Configuration

  1. Drag and drop it under the upload widget or click on “+ Choose”. Multiple files are supported. Import Machine Config File 1

  2. Click “Upload”. Import Machine Config File 2

  • The file will be saved in MinIO.

  • If the extension is not .json or the file exceeds the size limit, an error will be raised.

Visualize an existing Machine Configuration file

  1. Select a Machine Configuration file from the Machine Configuration files dropdown widget. Visualize Machine Config File 1

  2. Once selected, click on “Select File” to load it. Visualize Machine Config File 2

  3. Once loaded, the Machine Configuration hierarchy tree will be available under the “Machine Configuration Overview” accordion tab. The machine's components, process variables and their relations will also be available under the “Machine Configuration” accordion tab.

Edit an existing Machine Configuration file

  1. Follow the steps described in the previous functionality. Visualize Machine Config File 1

  2. Modify the desired parameters, e.g. adding new components or process variables, or modifying the min, max, unit or description columns of the existing variables in the hierarchy tree table, as in the “create a new machine configuration” functionality. If components, variables or relations between them are added, click on “Create” to update the hierarchy tree table.

  3. Click on “Export” to save it in MinIO.

Delete an existing Machine Configuration file

  1. Select a Machine Configuration file from the Machine Configuration files dropdown widget. Delete Machine Config File 1

  2. Once selected, click on “Delete File” to delete it from MinIO. Delete Machine Config File 2

Reset Machine Configuration screen

  • Click on the “Reset Screen” button, located in the top-right corner, to reset the screen.


Data Files

In the Data Files screen the following actions can be performed:

Import a Data File

  • Drag and drop or click “+ Choose” to upload.

Delete an existing Data File

  • Make sure “Data Files” is selected in the top-left buttons.

  • Select a data file from the data files dropdown widget.

  • Once selected, click on “Delete File” to delete it from MinIO.

Visualize Data File

  • Make sure “Data Files” is selected in the top-left buttons.

  • Select a data file from the dropdown.

    Visualize Data File 1

  • Data file will be shown in table format.

    Visualize Data File 2

Establish a connection to a MongoDB database

  • Make sure “MongoDB” is selected in the top-left buttons.

  • Parametrize the connection by defining: the IP address, the port, the username, the password and the name of the database to connect to.

    Mongo Connection 1

  • Once connected, available collections will be visible.

    Mongo Connection 2

Visualize Collection data

  • Make sure “MongoDB” is selected in the top-left buttons.

  • Once the connection is established select the desired collection from the collection selection dropdown and click on “Select Collection”. Mongo Visualization 1 Mongo Visualization 2

  • Collection data will be shown in table format under “Data Visualization Table” accordion tab. Mongo Visualization 3

  • If variables are not sorted in columns, the following information is needed:

    • First, check the checkbox below the “Connect” button. Mongo Visualization 4

    • Define the names of the columns containing the variable names, the values and the index of the time series. Mongo Visualization 5

    • Click on “Select Collection” again.

    • Collection data will be shown in table format under the “Data Visualization Table” accordion tab; it is not rearranged, but it is now possible to plot the variables.

Visualize Data Chart

  • Available regardless of the selected data source, once the data is loaded.

  • Under the “Data Visualization Chart” accordion tab, select the desired variables (up to 5) to display from the “Y axis variables” dropdown. The X-axis variable can also be chosen by enabling the “Choose X axis variable” checkbox and selecting the variable from the “X axis variable” dropdown. Chart Visualization 1

  • After clicking the “Plot” button, a warning pops up; after accepting it, a scatter plot will be shown.

    Chart Visualization 2 Chart Visualization 3


3.2. Main Workflows

Training RUL Models

In the Training screen the following actions can be performed:

Train a model

  1. Training is only available with data files. First, the user has to select among the existing files under the corresponding folder inside MinIO. Then click on the “Select Data” button.

  2. Once the data is loaded the model can be parameterized, specifying the following information: Training 1

  • The variable with the remaining useful life information.

  • The algorithm (only LSTM is currently available) and its parameters.

  • Whether to enable or disable AI explainability.

  • See an example of parameterization below: Training 2

  3. Click on “Training” to start the training process.

  4. Once finished, a report will be shown in the “Training Results” section. Training 3

Save a trained model

  • Click “Save” to store the model.

  • A model name is suggested, but it can be edited using the text input above the “Save” button. Save Training Results

Reset Training screen

  • Click on the “Reset Screen” button, located in the top-right corner, to reset the screen.

RUL Calculation

In the Remaining Useful Life Calculation screen the following actions can be performed:

Import an existing model

  • Drag and drop it under the upload widget or click on “+ Choose”. Multiple files are supported.

  • Click on “Upload”.

  • The file will be saved in MinIO.

  • If the extension is not .pkl or the file exceeds the size limit, an error will be raised.

Delete an existing model

  • Select a model file from the model files dropdown widget.

  • Once selected, click on “Delete Model” to delete it from MinIO.

Perform RUL calculation

  1. First, a model has to be selected using the “Model Selection” dropdown widget. Once selected, a report of the selected model will be shown at the bottom of the “Model Selection” section.

  2. Then the data source must be selected using the radio buttons in the top-left of the “Data Source Selection” section: “MongoDB” or “Data Files”. Depending on the selection, the dropdown widget shows either the existing files under the specified folder inside MinIO or the available collections. To select the “MongoDB” radio button, the user must first be connected to the database (see “Establish a connection to a MongoDB database” above). Finally, click on the “Select Data” button. There are two working modes:

  • Offline or on demand mode.

    • Cyclic results are disabled.

    • Finally, click on “Obtain Results” to perform the RUL Calculation.

    • A report will be shown in the “RUL Calculation Results” section. Offline Results 1 Offline Results 2

  • Online or cyclic mode.

    • Only works if a database is selected as the data source.

    • Cyclic results are enabled.

    • Click on the play button to perform the RUL calculation cyclically. The interval is defined in the config.yml file.

    • Results are shown cyclically. A report will be shown in the “RUL Calculation Results” section. If visualization has been parameterized, it will be updated automatically. Contextualized results in the results overview screen are also updated.

    • Clicking the stop button stops the operation. Online Results 1 Online Results 2

Visualize the results

  • Under the “Data Visualization Chart” accordion tab, two charts are shown: the evolution of the remaining useful life over time with two confidence intervals, and a chart with Gaussian distributions showing the prediction error. The second chart helps the user understand the error dispersion and evaluate whether the errors follow a normal (Gaussian) distribution around a central value (the mean error). If the errors fit a normal distribution, it means that the model is predicting with precision and consistency. Visu RUL Results 1 Visu RUL Results 2

Reset RUL calculation screen

  • Click on the “Reset Screen” button, located in the top-right corner, to reset the screen.

Training Risk Assessment Models

In the Training screen the following actions can be performed:

Train a model

  1. Training is only available with data files. First, the user has to select among the existing files under the corresponding folder inside MinIO. Then click on the “Select Data” button.

  2. Once the data is loaded the model can be parameterized, specifying the following information: Training 1

  • The variable to focus on or driver variable.

  • The covariates, or variables that characterize the operational context.

  • The event variable, which determines whether each row of data is to be considered a failure event or not.

  • See an example of parameterization below: Training 2

  3. Click on “Training” to start the training process.

  4. Once finished, a report will be shown in the “Training Results” section. Training 3

Visualize the results

  • Under the “Data Visualization Chart” accordion tab, two charts are shown: the covariates chart and the survival chart. The covariates chart shows which covariates have the strongest effect on failure. The survival chart is composed of the Kaplan-Meier curve, which describes the survival probability as a function of the driver variable, plus three extra curves showing the effect of the covariates on the survival probability when their values are at the 20th, 50th and 80th percentiles. Visu RA Results 1 Visu RA Results 2

Save a trained model

  • Click “Save” to store the model.

  • A model name is suggested, but it can be edited using the text input above the “Save” button. Save Training Results

Reset Training screen

  • Click on the “Reset Screen” button, located in the top-right corner, to reset the screen.

Risk Assessment

In the Risk Assessment screen the following actions can be performed:

Import an existing model

  • Drag and drop it under the upload widget or click on “+ Choose”. Multiple files are supported.

  • Click on “Upload”.

  • The file will be saved in MinIO.

  • If the extension is not .pkl or the file exceeds the size limit, an error will be raised.

Delete an existing model

  • Select a model file from the model files dropdown widget.

  • Once selected, click on “Delete Model” to delete it from MinIO.

Perform Risk Assessment

  1. First, a model has to be selected using the “Model Selection” dropdown widget. Once selected, a report of the selected model will be shown at the bottom of the “Model Selection” section.

  2. Then the data source must be selected using the radio buttons in the top-left of the “Data Source Selection” section: “MongoDB” or “Data Files”. Depending on the selection, the dropdown widget shows either the existing files under the specified folder inside MinIO or the available collections. To select the “MongoDB” radio button, the user must first be connected to the database (see “Establish a connection to a MongoDB database” above). Finally, click on the “Select Data” button. There are two working modes:

  • Offline or on demand mode.

    • Cyclic results are disabled.

    • Click on “Obtain Results” to perform the Risk Assessment calculation.

    • A report will be shown in the “Risk Assessment calculation - Results” section. Offline Results 1 Offline Results 2

  • Online or cyclic mode.

    • Only works if a database is selected as the data source.

    • Cyclic results are enabled.

    • Click on the play button to perform the Risk Assessment calculation cyclically. The interval is defined in the config.yml file.

    • Results are shown cyclically. A report will be shown in the “Risk Assessment calculation - Results” section. If visualization has been parameterized, it will be updated automatically. Contextualized results in the results overview screen are also updated.

    • Clicking the stop button stops the operation. Online Results 1 Online Results 2

Visualize the results

  • Under the “Data Visualization Chart” accordion tab, the survival chart is shown. It is composed of the Kaplan-Meier curve, which describes the survival probability as a function of the driver variable, an extra curve with the survival probability for the given operational context, and a vertical line at the corresponding driver value. The intersection between the survival curve for the given operational context and the vertical line indicates the survival probability for the given driver value. Visu RA Results 1

Reset Risk Assessment screen

  • Click on the “Reset Screen” button, located in the top-right corner, to reset the screen.

4. General Queries

4.1. Installation and Configuration Contact (If Service Provided)

For installation and configuration support, users should refer to the official GitLab project or the associated organization (IKERLAN).

4.2. Support

| Company | Website                 | Logo         |
| ------- | ----------------------- | ------------ |
| IKERLAN | https://www.ikerlan.es/ | Ikerlan logo |

4.3. Licensing

  • The solution is licensed under AGPLv3 or PRIVATE licensing models.

  • Pricing and licensing details are available upon request.

| Subject       | Value                   |
| ------------- | ----------------------- |
| Payment Model | Quotation under request |
| Price         | Quotation under request |

5. User Manual

5.1. Glossary of Terms

– COMPLETE –

5.2. API Documentation

By default, the backend server listens on port 5000 and exposes the following API methods. These methods are accessible through the application frontend, by sending the proper request with tools like Postman, or directly from Python code.
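
For example, the Machine Configuration file listing can be requested directly from Python as follows (the host and the "ikerlan" user ID are assumptions made for the example):

    import requests

    BASE_URL = "http://localhost:5000"        # assumed host; port 5000 is the default

    # List the Machine Configuration files stored for a given user
    resp = requests.get(f"{BASE_URL}/machineConfig/ikerlan")
    resp.raise_for_status()
    print(resp.json())                        # e.g. [{"id": 0, "name": "myFileName.json"}, ...]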

| Resource                                         | GET       | POST      | PUT       | DELETE    |
| ------------------------------------------------ | --------- | --------- | --------- | --------- |
| /machineConfig/<user_id>                         | Supported | Supported | Supported |           |
| /machineConfig/<user_id>/upload_machineConfig    |           | Supported |           | Supported |
| /dataFiles/<user_id>                             | Supported | Supported |           |           |
| /dataFiles/<user_id>/upload_dataFile             |           | Supported |           | Supported |
| /dataFilesRISKA/<user_id>                        | Supported | Supported |           |           |
| /dataFilesRISKA/<user_id>/upload_dataFile        |           | Supported |           | Supported |
| /dataFilesContextRISKA/<user_id>                 | Supported | Supported |           |           |
| /dataFilesContextRISKA/<user_id>/upload_dataFile |           | Supported |           | Supported |
| /dataMongo                                       | Supported | Supported |           |           |
| /dataMongo/connection                            |           | Supported |           |           |
| /algorithmListPM_RULC                            | Supported | Supported |           |           |
| /trainingPM_RULC                                 |           | Supported |           |           |
| /trainingPM_RULC/saveModel                       |           | Supported |           |           |
| /modelsRULC/<user_id>                            | Supported | Supported |           | Supported |
| /prescriptiveMaintenanceRULC/model               |           | Supported |           |           |
| /prescriptiveMaintenanceRULC                     |           | Supported |           |           |
| /trainingPM_RISKA                                |           | Supported |           |           |
| /trainingPM_RISKA/saveModel                      |           | Supported |           |           |
| /modelsRISKA/<user_id>                           | Supported | Supported |           | Supported |
| /prescriptiveMaintenanceRISKA/model              |           | Supported |           |           |
| /prescriptiveMaintenanceRISKA                    |           | Supported |           |           |
| /startJob                                        | Supported | Supported |           |           |
| /stopJob                                         |           | Supported |           |           |
| /jobResults                                      | Supported |           |           |           |

5.3 Machine Configuration

/machineConfig/<user_id>

  • GET → Gets the list of Machine Configuration file names stored under the MinIO folder specified in config.yml.

  • POST → Sends the selected Machine Configuration filename to the backend.

  • PUT → Sends the defined Machine Configuration to be stored as a JSON file under MinIO.

    GET response type:
    [{    
      "id": 0,
      "name": "myFileName.json"
    }, {}, {}]
    
    POST request type:
    {    
      "username": "ikerlan",
      "filename": "lookUpTable.json"
    }
    POST response type:
    {
      "machine": "myMachine",
      "components": [],
      "variables": [],
      "componentDependantVariables": []
    }
    
    PUT request type:
    {    
      "username": "ikerlan",
      "filename": "fileName.json",
      "machineTree": {} 
    }
    

/machineConfig/<user_id>/upload_machineConfig

  • POST → Stores a list of Machine Configuration files (machineConfig[]) in JSON format under MinIO.

  • DELETE → Deletes the selected Machine Configuration file under MinIO.

    DELETE request type:
    {    
      "username": "ikerlan",
      "filename": "fileName.json"
    }
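
A hedged example of calling these two methods from Python, following the request types above (the host, file name and multipart handling are assumptions; the "machineConfig[]" field name follows the description above):

    import requests

    BASE_URL = "http://localhost:5000"        # assumed host
    USER = "ikerlan"

    # POST: upload one (or more) Machine Configuration JSON files
    with open("myMachine.json", "rb") as f:
        files = [("machineConfig[]", ("myMachine.json", f, "application/json"))]
        r = requests.post(f"{BASE_URL}/machineConfig/{USER}/upload_machineConfig", files=files)
        r.raise_for_status()

    # DELETE: remove a stored Machine Configuration file
    r = requests.delete(f"{BASE_URL}/machineConfig/{USER}/upload_machineConfig",
                        json={"username": USER, "filename": "myMachine.json"})
    r.raise_for_status()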
    

5.4 Data Files

/dataFiles/<user_id>, /dataFilesRISKA/<user_id> and /dataFilesContextRISKA/<user_id>

  • GET → Gets the list of data file names stored under MinIO.

  • POST → Given a data file name, returns the column names and the data to be visualized.

    • Obtain column names

    • Obtain data file rows (given the number of rows to get and the starting row)

    • Obtain data file columns (given the column names to get)

    • Obtain data (given the number of rows to get)

      GET response type:
      [{    
        "id": 0,
        "name": "myFileName.csv"
      }, {}, {}, ...]
      
      POST obtain column names 
      Request type:
      {    
        "username": "ikerlan",
        "filename": "myFileName.csv",
        "colNames": true
      }
      Response type:
      {
        "filename": "myFileName.csv",
        "totalRows": int,
        "colNames": []
      }
      
      POST obtain data file rows
      Request type:
      {    
        "username": "ikerlan",
        "filename": "myFileName.csv",
        "nRows": int,
        "startingRow": int
      }
      Response type:
      {
        "filename": "myFileName.csv",
        "dfDict": {}
      }
      
      POST obtain data file columns
      Request type:
      {    
        "username": "ikerlan",
        "filename": "myFileName.csv",
        "xVar": "xVar",
        "yVars": []
      }
      Response type:
      {
        "xVarName": "xVar",
        "xVarValues": [],
        "yVarsNames": ["yVar", "yVar1", ...],
        "yVarsValues": [[],[],...]
      }
      
      POST obtain data
      Request type:
      {    
        "username": "ikerlan",
        "filename": "myFileName.csv",
        "nRows": int,
      }
      Response type:
      {
        "filename": "myFileName.csv",
        "totalRows": int,
        "colNames": [],
        "dfDict": {}
      }
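
For example, a page of rows could be requested from Python as follows, following the "POST obtain data file rows" request type above (the host and file name are placeholders):

    import requests

    payload = {
        "username": "ikerlan",
        "filename": "myFileName.csv",
        "nRows": 100,        # number of rows to return
        "startingRow": 0,    # first row of the page
    }
    r = requests.post("http://localhost:5000/dataFiles/ikerlan", json=payload)
    r.raise_for_status()
    page = r.json()
    print(page["filename"], list(page["dfDict"].keys())[:5])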
      

/dataFiles/<user_id>/upload_dataFile, /dataFilesRISKA/<user_id>/upload_dataFile and /dataFilesContextRISKA/<user_id>/upload_dataFile

  • POST → Stores a list of data files (dataFile[]) in .csv, .xls, or .xlsx formats under MinIO.

  • DELETE → Deletes the selected data file under MinIO.

    DELETE request type:
    {    
      "username": "ikerlan",
      "filename": "dataFile.csv"
    }
    

5.5 MongoDB Data

/dataMongo

  • GET → Gets the list of MongoDB collections in the established DB connection.

  • POST → Given a collection name, returns the column names and the data to be visualized.

    • Obtain collection data rows (given the number of rows to get and the starting row).

    • Obtain collection data when the variables are stored in rows → given the names of the columns holding the variable names, the variable values and the collection index (time variable).

    • Obtain collection data by columns → given the column names to get.

      POST obtain collection data (variables stored in rows)
      Request type:
      {    
        "collection": "collectionName",
        "nRows": int,
        "selectedColVarNames": "",
        "selectedColValues": "",
        "selectedColIndex": ""
      }
      Response type:
      {
        "collection": "collectionName",
        "totalRows": int,
        "colNames": [],
        "dfDict": {},
        "varsInCol": []
      }
      
      POST obtain collection data rows
      Request type:
      {    
        "collection": "collectionName",
        "nRows": int,
        "startingRow": int,
      }
      Response type:
      {
        "collection": "collectionName",
        "totalRows": int,
        "colNames": [],
        "dfDict": {}
      }
      
      POST obtain collection data by columns
      Request type:
      {    
        "collection": "collectionName",
        "xVar": "",
        "yVars": [],
        "selectedColVarNames": "",
        "selectedColValues": "",
        "selectedColIndex": "",
      }
      Response type:
      {
        "xVarName": "xVar",
        "xVarValues": [],
        "yVarsNames": ["yVar", "yVar1", ...],
        "yVarsValues": [[],[],...]
      }
      

/dataMongo/connection

  • POST → Establishes a connection with a MongoDB database given a set of connection parameters.

    POST request type:
    {    
      "user": "",
      "password": "",
      "ip": "",
      "port": int,
      "dbName": ""
    }
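
For example, the connection could be established from Python as follows (all connection parameters are placeholders):

    import requests

    connection_params = {
        "user": "dbUser",
        "password": "dbPassword",
        "ip": "192.168.1.10",
        "port": 27017,
        "dbName": "machineData",
    }
    r = requests.post("http://localhost:5000/dataMongo/connection", json=connection_params)
    r.raise_for_status()

    # Once connected, the available collections can be listed
    print(requests.get("http://localhost:5000/dataMongo").json())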
    

5.6 Prescriptive Maintenance Resources

/algorithmListPM_RULC

  • GET → Gets the list of available algorithms to train models with.

  • POST → Given an algorithm, returns the list of parameters and their default values.

    GET response type:
    [{
      "name": "algorithm"
    }, {}, {}, ...]
    
    POST obtain algorithm parameters
    Request type:
    {    
      "selectedAlgorithm": "algorithm"
    }
    Response type:
    {
      "algorithmParamsList": ["parameter1":{
        "description": "", 
        "dataType": "", 
        "rangeMin": "", 
        "rangeMax": int, 
        "defaultValue": int}, {}, {}, ...]
    }
    

5.7 Training RUL Models

/trainingPM_RULC

  • POST → Sends training parameters and obtains information about the trained model.

    POST request type:
    {    
      "username": "ikerlan",
      "selectedDataFiles": [],
      "selectedRULVar": "",
      "selectedAlgorithm": "",
      "selectedAlgorithm": {},
      "explainability": bool
    }
    POST response type:
    {
      "trainingReport": "",
      "modelName": ""
    }
    

/trainingPM_RULC/saveModel

  • POST → Stores the trained model as a .pkl file under the MinIO folder specified in the config.yml file.

    POST request type:
    {    
      "username": "ikerlan",
      "modelName": "myModel.pkl",
    }
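
A sketch of launching a training run and saving the resulting model from Python, following the request types above (the host and field values are placeholders; the algorithm-parameters object shown in the request type is omitted here because its exact key name is not specified):

    import requests

    BASE_URL = "http://localhost:5000"             # assumed host

    # Launch the training run
    training_request = {
        "username": "ikerlan",
        "selectedDataFiles": ["train_data.csv"],   # files previously uploaded to MinIO
        "selectedRULVar": "RUL",                   # column holding the RUL values
        "selectedAlgorithm": "LSTM",
        "explainability": False,
    }
    r = requests.post(f"{BASE_URL}/trainingPM_RULC", json=training_request)
    r.raise_for_status()
    result = r.json()
    print(result["trainingReport"])

    # Persist the trained model to MinIO
    requests.post(f"{BASE_URL}/trainingPM_RULC/saveModel",
                  json={"username": "ikerlan", "modelName": result["modelName"]}
                  ).raise_for_status()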
    

5.8 Prescriptive Maintenance RUL, testing models

/modelsRULC/<user_id>

  • GET → Gets the list of Model files under the MinIO folder specified in the config.yml file.

  • POST → Stores a list of Model files (modelFile[]) in .pkl format under the MinIO folder specified in the config.yml file.

  • DELETE → Deletes the selected Model file from the MinIO folder specified in the config.yml file.

    GET response type:
    [{    
      "id": 0,
      "name": "myFileName.pkl"
    }, {}, {}]
    
    DELETE request type:
    {    
      "username": "ikerlan",
      "filename": "model.pkl"
    }
    

/prescriptiveMaintenanceRULC/model

  • POST → Sends the selected model and gets the model information and the training results.

    POST request type:
    {    
      "username": "ikerlan",
      "selectedModel": "model.pkl"
    }
    POST response type:
    {
      "trainingReport": "",
      "trainingParams": {},
      "modelName": ""
    }
    

/prescriptiveMaintenanceRULC

  • POST → Gets the results of the RUL calculation, given a model and a dataset.

    POST request type:
    {    
      "username": "ikerlan",
      "selectedDataFile": "myFile.csv",
      "selectedCollection": "myCollection",
      "selectedModel": "myModel.pkl",
    }
    POST response type:
    {
      "prescriptiveMaintenanceReport": "", 
      "data": {"xData": [], "data_mean_pred": [], "data_real": [], "data_pred_lower": [], "data_pred_upper": [], "xValuesGaussianPlots": [], "yValuesGaussianPlots": [] }
    }
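
For example, an on-demand RUL calculation could be requested from Python as follows (the host and names are placeholders; leaving "selectedCollection" empty when a data file is used is an assumption):

    import requests

    payload = {
        "username": "ikerlan",
        "selectedDataFile": "myFile.csv",
        "selectedCollection": "",              # assumption: empty when a data file is used
        "selectedModel": "myModel.pkl",
    }
    r = requests.post("http://localhost:5000/prescriptiveMaintenanceRULC", json=payload)
    r.raise_for_status()
    out = r.json()
    print(out["prescriptiveMaintenanceReport"])
    rul_mean = out["data"]["data_mean_pred"]       # predicted RUL series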
    

5.9 Training Risk Assessment Models

/trainingPM_RISKA

  • POST → Sends training parameters and obtains information about the trained model.

    POST request type:
    {    
      "username": "ikerlan",
      "selectedDataFiles": [],
      "selectedDriverVarRISKA": "",
      "selectedCovariablesVarsRISKA": [],
      "selectedEventVarRISKA": ""
    }
    POST response type:
    {
      "trainingReport": "",
      "covariablesTableReport": "",
      "trainingResultsData": 
      {
        "xCovariables": [], "yCovariables": [], "xSurvivalKaplanMeier": [], "ySurvivalKaplanMeier": [], 
        "confIntUpperKaplanMeier": [], "confIntLowerKaplanMeier": [], 
        "xSurvivalTimePoints": [], "ySurvivalPercentile20": [], "ySurvivalPercentile50": [], "ySurvivalPercentile80": [], 
      },
      "modelName": ""
    }
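
A sketch of launching a Risk Assessment training run from Python, following the request type above (the host, column names and file names are placeholders):

    import requests

    training_request = {
        "username": "ikerlan",
        "selectedDataFiles": ["failure_history.csv"],
        "selectedDriverVarRISKA": "operating_hours",   # driver / variable of interest
        "selectedCovariablesVarsRISKA": ["temperature", "vibration"],
        "selectedEventVarRISKA": "failure",            # event indicator column
    }
    r = requests.post("http://localhost:5000/trainingPM_RISKA", json=training_request)
    r.raise_for_status()
    result = r.json()
    print(result["trainingReport"])
    print(result["covariablesTableReport"])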
    

/trainingPM_RISKA/saveModel

  • POST → Stores the trained model as a .pkl file under the MinIO folder specified in the config.yml file.

    POST request type:
    {    
      "username": "ikerlan",
      "modelName": "myModel.pkl",
    }
    

5.10 Prescriptive Maintenance Risk Assessment, testing models

/modelsRISKA/<user_id>

  • GET → Gets the list of Model files under the MinIO folder specified in the config.yml file.

  • POST → Stores a list of Model files (modelFile[]) in .pkl format under the MinIO folder specified in the config.yml file.

  • DELETE → Deletes the selected Model file from the MinIO folder specified in the config.yml file.

    GET response type:
    [{    
      "id": 0,
      "name": "myFileName.pkl"
    }, {}, {}]
    
    
    DELETE request type:
    {    
      "username": "ikerlan",
      "filename": "model.pkl"
    }
    

/prescriptiveMaintenanceRISKA/model

  • POST → Sends the selected model and gets the model information and the training results.

    POST request type:
    {    
      "username": "ikerlan",
      "selectedModel": "model.pkl"
    }
    POST response type:
    {
      "modelName": "",
      "trainingReport": "",
      "covariablesTableReport": "",
      "xCovariables": [],
      "yCovariables": [],
      "xSurvivalKaplanMeier": [],
      "ySurvivalKaplanMeier": [],
      "confIntUpperKaplanMeier": [],
      "confIntLowerKaplanMeier": [],
      "xSurvivalTimePoints": [],
      "ySurvivalPercentile20": [],
      "ySurvivalPercentile50": [],
      "ySurvivalPercentile80": [],
    }
    

/prescriptiveMaintenanceRISKA

  • POST → Gets the results of the Risk Assessment calculation, given a model and a dataset.

    POST request type:
    {    
      "username": "ikerlan",
      "selectedModel": "myModel.pkl",
      "selectedContextData": "myFile.csv",
      "selectedCollection": "myCollection",
      "selectedDriverValue":double
      
    }
    POST response type:
    {
        "resultsReport": "", 
        "resultsData": 
          {
            "xSurvivalKaplanMeier": [],
            "ySurvivalKaplanMeier": [],
            "confIntUpperKaplanMeier": [],
            "confIntLowerKaplanMeier": [],
            "xSurvivalTimePoints": [],
            "ySurvivalProbabilities": []
          },
        "driverValue": double,
        "driverValueProbability": double
    }
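
For example, an on-demand Risk Assessment could be requested from Python as follows (the host and values are placeholders; leaving "selectedCollection" empty when a context data file is used is an assumption):

    import requests

    payload = {
        "username": "ikerlan",
        "selectedModel": "myModel.pkl",
        "selectedContextData": "myFile.csv",
        "selectedCollection": "",          # assumption: empty when a context data file is used
        "selectedDriverValue": 1200.0,     # e.g. cumulative operating hours
    }
    r = requests.post("http://localhost:5000/prescriptiveMaintenanceRISKA", json=payload)
    r.raise_for_status()
    out = r.json()
    print(out["resultsReport"])
    print("Survival probability at the driver value:", out["driverValueProbability"])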
    

5.11 Cyclic Mode

5.11.1 RUL Calculation

/startJob
  • GET → Configures the cyclic task, adds it to the queue and starts the job. Sockets are used to update the UI every time a new result is obtained. If the cyclic task had been stopped, the job is resumed.

  • POST → Sends the data needed to obtain the results, such as the machine configuration file, the collection from which data is gathered, the model to be evaluated, and so on.

    GET response type:
    {    
      "message": ""
    }
    
    POST configure cyclic job
    Request type:
    {    
      "username": "ikerlan",
      "selectedCollection": "myCollection",
      "connectionParams": {},
      "selectedModel": "myModel.pkl"
    }
    Response type:
    {
      "message": "",
    }
    
/stopJob
  • POST → Stops the cyclic task. If a job has already been scheduled, it will be processed.

    POST response type:
    {    
      "message": ""
    }
    

5.11.2 Risk Assessment Calculation

/startJobRISKA
  • GET → Configures the cyclic task, adds it to the queue and starts the job. Sockets are used to update the UI every time a new result is obtained. If the cyclic task had been stopped, the job is resumed.

  • POST → Sends the data needed to obtain the results, such as the machine configuration file, the collection from which data is gathered, the model to be evaluated, and so on.

    GET response type:
    {    
      "message": ""
    }
    
    POST configure cyclic job
    Request type:
    {    
      "username": "ikerlan",
      "selectedCollection": "myCollection",
      "connectionParams": {},
      "selectedModel": "myModel.pkl"
    }
    Response type:
    {
      "message": "",
    }
    
/stopJobRISKA
  • POST → Stops the cyclic task. If a job has already been scheduled, it will be processed.

    POST response type:
    {    
      "message": ""
    }
    

5.12 Socket Messages (only in Cyclic Mode)

5.12.1 RUL Calculation

The socket routine is handled by the my_scheduled_results() function in socket_server.py. This function is executed cyclically, as defined when calling the /startJob method, and performs the following operations:

/startJob
  • Computes the time interval used to query data from the database. During the first run, the current datetime is taken as the end time and one month prior as the start time, both in UTC+0. In subsequent runs these values are updated; e.g. the second run goes from the current datetime to the current datetime plus the defined interval (see the sketch after the message examples below).

  • Data is read from the database.

  • Results are obtained.

  • The following messages are sent:

    • Ok message:

    {
      "status": "ok",
      "message": f"Results obtained.\nFrom: {iniTS_string}, To: {endTS_string}\n{list(report.values())[-1]}",
      "pm_rul_results": {
        "prescriptiveMaintenanceReport": "", "dataVariables": [], "outliersTableValues": [], "outliersTableColNames": [],
        "data": {"xData": [], "data_mean_pred": [], "data_pred_lower": [], "data_pred_upper": [], "xValuesGaussianPlots": [], "yValuesGaussianPlots": []}
      }
    }
    
    • Warning message, if there is no data:

    {
      "status": "warn",
      "message": f"No data found for the current time period.\nFrom: {iniTS_string}, To: {endTS_string}"
    }
    
    • Error message, if an error or exception happened:

    {
      "status": "error",
      "message": f"Error getting data!\n{ msg['message']},\nFrom: {iniTS_string}, To: {endTS_string}"
    }
    {
      "status": "error",
      "message": f"Error getting results!\n{ msg['message']},\nFrom: {iniTS_string}, To: {endTS_string}"
    }
    {
      "status": "error",
      "message": f"Exception happened while obtaining cyclic results.\nException: {str(e)},\nFrom: {iniTS_string}, To: {endTS_string}"
    }
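
A minimal sketch of the time-window bookkeeping described above; the actual logic lives in my_scheduled_results() in socket_server.py and may differ (the 30-day first window and the interval value are assumptions):

    from datetime import datetime, timedelta, timezone

    INTERVAL = timedelta(minutes=10)    # assumed value of the interval set in config.yml
    _window_end = None                  # state kept between cyclic runs

    def next_window():
        """Return the (start, end) UTC datetimes to query on each cyclic run."""
        global _window_end
        if _window_end is None:                          # first run: last month of data
            end = datetime.now(timezone.utc)
            start = end - timedelta(days=30)
        else:                                            # later runs: advance by the interval
            start = _window_end
            end = _window_end + INTERVAL
        _window_end = end
        return start, end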
    

5.12.2 Risk Assessment Calculation

The socket routine is handled by the my_scheduled_results_riskA() function in socket_server.py. This function is executed cyclically, as defined when calling the /startJobRISKA method, and performs the following operations:

/startJobRISKA
  • Computes the time interval used to query data from the database. During the first run, the current datetime is taken as the end time and one month prior as the start time, both in UTC+0. In subsequent runs these values are updated; e.g. the second run goes from the current datetime to the current datetime plus the defined interval.

  • Data is read from the database.

  • Results are obtained.

  • The following messages are sent:

    • Ok message:

    {
      "status": "ok",
      "message": f"Results obtained.\nFrom: {iniTS_string}, To: {endTS_string}\n{list(report.values())[-1]}",
      "pm_riskA_results": {
        "resultsReport": "", 
        "resultsData": {
          "xSurvivalKaplanMeier": [], "ySurvivalKaplanMeier": [], 
          "confIntUpperKaplanMeier": [], "confIntLowerKaplanMeier": [], 
          "xSurvivalTimePoints": [], "ySurvivalProbabilities": []
        }, 
        "driverValue": "int", "driverValueProbability": "float"
      }
    }
    
    • Warning message, if there is no data:

    {
      "status": "warn",
      "message": f"No data found for the current time period.\nFrom: {iniTS_string}, To: {endTS_string}"
    }
    
    • Error message, if an error or exception happened:

    {
      "status": "error",
      "message": f"Error getting data!\n{ msg['message']},\nFrom: {iniTS_string}, To: {endTS_string}"
    }
    {
      "status": "error",
      "message": f"Error getting results!\n{ msg['message']},\nFrom: {iniTS_string}, To: {endTS_string}"
    }
    {
      "status": "error",
      "message": f"Exception happened while obtaining cyclic results.\nException: {str(e)},\nFrom: {iniTS_string}, To: {endTS_string}"
    }
    

5.13. Console Commands List

  • npm install for frontend dependencies.

  • pip install -r dev.txt for backend dependencies.

  • docker-compose up --build for Docker-based deployment.

  • python server.py to launch the backend server.

  • npm run dev to start the frontend server.