Improving predictive maintenance: Evaluating the impact of preprocessing and model complexity on the effectiveness of eXplainable Artificial Intelligence methods
Authors: Mouhamadou-Lamine Ndao (LINEACT), Genane Youness (LINEACT), Ndeye Niang, Gilbert Saporta (CNAM)
Article: Peer-reviewed international or national journal article - 15/03/2025 - Engineering Applications of Artificial Intelligence
Due to their predictive performance in predictive maintenance, Long Short-Term Memory (LSTM) neural networks are often
used to predict the remaining useful life (RUL). However, their complexity limits the interpretability of their
results, so eXplainable Artificial Intelligence (XAI) methods are used to understand the relationship between
the input data and the predicted RUL. Modeling involves making choices, such as preprocessing strategies or
model complexity. Understanding how these modeling choices affect the effectiveness of XAI methods is crucial.
This paper investigates the impact of two modeling aspects, the preprocessing of multivariate time series and model
complexity (specifically the number of hidden layers), on the quality of the explanations provided by three
post-hoc, local, model-agnostic XAI methods (Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive
exPlanations (SHAP), and Learning to eXplain (L2X)) in the context of RUL prediction. The quality of the
XAI methods is evaluated using eleven metrics, categorized under five properties based on the definitions
of interpretability and explainability. Experiments on the C-MAPSS dataset for aero-engine prognostics
demonstrate that SHAP often provides better explanations when optimized preprocessing parameters are used.
However, variations in these preprocessing parameters affect the quality of the explanations. Additionally, the
results suggest no significant correlation between the complexity of the LSTM model and explanation quality,
although changes in the number of layers notably influence the precision of SHAP’s explanations.
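As an illustration of the setting the abstract describes (and not the authors' implementation), the sketch below applies SHAP's model-agnostic KernelExplainer to a toy LSTM RUL regressor on windowed sensor data, mirroring the sliding-window preprocessing commonly used for C-MAPSS. The window length, sensor count, network architecture, synthetic data, and sampling parameters are all assumptions chosen for demonstration only.

```python
# Illustrative sketch only: a local SHAP explanation of a toy LSTM RUL regressor.
# WINDOW, N_FEATURES, the architecture and the random data are assumptions.
import numpy as np
import shap
from tensorflow import keras

WINDOW, N_FEATURES = 30, 14  # assumed sliding-window length and sensor count

# Small LSTM regressor predicting RUL from one sensor window (toy architecture).
model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in for preprocessed (normalized, windowed) sensor data.
X_train = np.random.rand(200, WINDOW, N_FEATURES).astype("float32")
y_train = np.random.rand(200).astype("float32")
model.fit(X_train, y_train, epochs=1, verbose=0)

# KernelExplainer is model-agnostic but expects 2-D inputs, so each window is
# flattened for SHAP and reshaped back before being passed to the LSTM.
def predict_flat(x_flat):
    return model.predict(x_flat.reshape(-1, WINDOW, N_FEATURES), verbose=0).ravel()

background = X_train[:20].reshape(20, -1)  # small background sample
explainer = shap.KernelExplainer(predict_flat, background)

# Local explanation for one test window: one SHAP value per (time step, sensor).
x_test = X_train[:1].reshape(1, -1)
shap_values = explainer.shap_values(x_test, nsamples=200)
print(np.asarray(shap_values).reshape(WINDOW, N_FEATURES).shape)  # (30, 14)
```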