Explainable AI: Why Post-Hoc Rationalizations Are Not Reasoning
The model made a decision. The explanation tells you a plausible story about why. These are not the same thing.
Post-hoc explainable AI produces persuasive narratives, not faithful traces of the model's computation. SHAP and LIME routinely generate different explanations for the same prediction from the same model; at most one of them can be faithful, and often neither is. Regulators demand explanations. XAI supplies rationalizations of unknown fidelity.
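The disagreement is easy to reproduce. Below is a minimal sketch that explains a single prediction with both SHAP and LIME and prints the two attributions side by side; the synthetic dataset and feature names are illustrative, and it assumes scikit-learn, shap, and lime are installed.

```python
# Sketch of the disagreement problem: two explainers, one model,
# one prediction, two different stories. All names and data here
# are illustrative, not from any particular deployment.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
import shap

X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = RandomForestRegressor(random_state=0).fit(X, y)
instance = X[0]

# SHAP: additive, game-theoretic attributions for this prediction.
shap_vals = shap.TreeExplainer(model).shap_values(instance.reshape(1, -1))[0]

# LIME: coefficients of a local linear surrogate fitted to
# perturbed samples around the same instance.
lime_exp = LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression"
).explain_instance(instance, model.predict, num_features=len(feature_names))

print("SHAP:", dict(zip(feature_names, np.round(shap_vals, 2))))
print("LIME:", lime_exp.as_list())
# The attribution magnitudes, and often the feature rankings,
# differ between the two outputs -- with no ground truth against
# which to check either one.
```

Neither output is "wrong" by its own definition: SHAP allocates the prediction across features via Shapley values, while LIME reports the weights of a local linear approximation. They answer different questions, which is exactly why treating either as the model's reasoning is a category error.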