An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction
Article: Articles in international or national peer-reviewed journals
Abstract: Prognosis and health management depend on sufficient prior knowledge of the degradation process of critical components to predict the remaining useful life (RUL). This task is composed of two phases: learning and prediction. The first phase uses the available information to learn the system's behavior; the second predicts the system's future behavior and estimates its remaining lifetime.
Deep learning approaches achieve good prognostic performance but usually suffer from a high computational load and a lack of interpretability. Complex feature extraction models do not solve this problem: they lose information in the learning phase and thus yield poor prognoses of the remaining lifetime. To address this issue, a new preprocessing approach based on feature clustering is used. It restructures the data into homogeneous groups of strongly correlated features that can be handled by a simple LSTM architecture, which is advantageous in terms of learning time and makes it possible to work with limited computational capabilities.
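The abstract gives no implementation details here, but the preprocessing idea can be sketched. A minimal illustration, assuming correlation-based hierarchical clustering of the C-MAPSS sensor channels followed by a compact single-layer LSTM regressor; every name, shape, and hyperparameter below is illustrative, not the authors' actual code:

# Minimal sketch: group strongly correlated sensor features, then train a
# compact LSTM on the resulting homogeneous groups. `sensors` is assumed to
# be a NumPy array of shape (n_samples, n_features) taken from C-MAPSS.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from tensorflow import keras

def cluster_features(sensors, n_clusters=4):
    """Label each feature with the homogeneous group it belongs to."""
    corr = np.corrcoef(sensors.T)              # feature-feature correlation
    dist = 1.0 - np.abs(corr)                  # strong correlation -> small distance
    condensed = dist[np.triu_indices_from(dist, k=1)]
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")

def build_lstm(n_timesteps, n_features):
    """Simple single-layer LSTM regressor predicting a scalar RUL."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_timesteps, n_features)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

Each homogeneous group of columns can then be windowed into sequences and fed to such a compact model, which is what keeps training cheap relative to a deep feature-extraction stack.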
We then focus on the interpretability of the deep learning prognosis, using Explainable AI to achieve interpretable RUL prediction. The proposed approach offers both model improvement and enhanced interpretability, enabling a better understanding of feature contributions. Experimental results on the publicly available NASA C-MAPSS dataset show the performance of the proposed model compared to other common methods.
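The abstract does not name the specific Explainable AI technique used. As one hedged illustration of how per-feature contributions to the RUL prediction could be quantified, the sketch below uses model-agnostic permutation importance (assuming the Keras model and windowed arrays from the sketch above; all names are illustrative):

# Illustrative attribution: permutation feature importance. `model` is the
# trained RUL regressor from the sketch above; X has shape
# (n_windows, n_timesteps, n_features); y holds the true RUL per window.
import numpy as np

def permutation_importance(model, X, y, seed=0):
    """Increase in MSE when one feature channel is shuffled across windows."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model.predict(X, verbose=0).ravel() - y) ** 2)
    scores = np.zeros(X.shape[2])
    for j in range(X.shape[2]):
        Xp = X.copy()
        Xp[:, :, j] = Xp[rng.permutation(len(Xp)), :, j]  # break channel j
        mse = np.mean((model.predict(Xp, verbose=0).ravel() - y) ** 2)
        scores[j] = mse - base_mse                        # contribution proxy
    return scores

Channels whose shuffling degrades the error most are the ones the model relies on, giving a simple ranking of feature contributions in the spirit of the interpretability analysis described above.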