
Explainable Remaining Useful Life Prediction Using Interpretable Divisive Feature Clustering

Article: articles in international or national peer-reviewed journals

Accurate prediction of remaining useful life (RUL) is critical for effective predictive maintenance. While models like long
short-term memory (LSTM) are effective, they often lack interpretability, even when using explainable artificial intelligence
(XAI) methods such as Shapley additive explanations (SHAP). This is particularly true when these models are trained on
high-dimensional, redundant features. To tackle this issue, we introduce the interpretable divisive feature clustering (IDFC)
algorithm. This unsupervised dimensionality reduction method combines the advantages of divisive clustering and k-means-like
clustering to group highly correlated features into unidimensional clusters. Additionally, it selects representative features to
maintain semantic meaning. By doing so, the reliability of post hoc explanation methods like SHAP is improved by reducing
multicollinearity. When IDFC is combined with a one-layer LSTM model on the C-MAPSS dataset, it achieves competitive RUL
prediction performance with significantly fewer features, yielding a lower prediction error than
principal component analysis (PCA)-based approaches. Additionally, the quality of explanations provided by SHAP, assessed using
several functionally grounded metrics such as coherence, stability, and acumen, is enhanced when IDFC is applied, as opposed to
using all features. Finally, utilizing IDFC reduces explanation time by 41% compared to the baseline model that includes all features.
These findings confirm that IDFC improves both the predictive accuracy and explanatory power of deep models in complex
industrial environments.
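As a rough illustration of the idea summarized above, the sketch below groups highly correlated features top-down (divisive) and seeds each binary split from the least-correlated pair, in the spirit of a k-means-like assignment, then picks one representative feature per cluster so the retained inputs keep their original semantic meaning. The function names, the splitting heuristic, and the 0.9 correlation threshold are illustrative assumptions, not the authors' IDFC implementation.

```python
import numpy as np

def divisive_feature_clusters(X, corr_threshold=0.9):
    """Top-down clustering of the columns of X (n_samples, n_features).

    A cluster is kept once all its features are pairwise correlated
    above corr_threshold (in absolute value); otherwise it is split
    around its least-correlated feature pair, assigning every feature
    to whichever of the two seeds it correlates with more strongly.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    pending, done = [list(range(X.shape[1]))], []
    while pending:
        c = pending.pop()
        if len(c) == 1:
            done.append(c)
            continue
        sub = corr[np.ix_(c, c)]
        if sub.min() >= corr_threshold:
            done.append(c)  # cluster is internally homogeneous
            continue
        # seeds: the least-correlated pair inside the cluster
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        a, b = [], []
        for local, feat in enumerate(c):
            (a if sub[local, i] >= sub[local, j] else b).append(feat)
        pending += [a, b]
    return done

def representatives(X, clusters):
    """Per cluster, keep the feature most correlated with the rest."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    return [c[int(np.argmax(corr[np.ix_(c, c)].sum(axis=0)))]
            for c in clusters]
```

Training the downstream model (and running SHAP) only on the representative features is what reduces multicollinearity and, in the paper's experiments, explanation time.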