Explanation Fidelity Evaluation
6 papers with code • 6 benchmarks • 6 datasets
Evaluation of how faithfully an explanation reflects the behavior of the underlying model.
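One common way to measure fidelity, sketched below under assumed conventions (a deletion-style metric with a zero baseline; real evaluations vary in masking strategy and baseline choice): remove the features an explanation ranks highest and check how much the model's output drops. The function names and the toy linear model are illustrative, not from any specific paper on this page.

```python
import numpy as np

def deletion_fidelity(predict, x, attributions, k):
    """Zero out the k features with the largest absolute attributions and
    return the resulting drop in the model's score. A faithful explanation
    should produce a large drop when its top features are removed."""
    top = np.argsort(-np.abs(attributions))[:k]
    x_masked = x.copy()
    x_masked[top] = 0.0  # zero baseline; other baselines are possible
    return predict(x) - predict(x_masked)

# Toy linear model: weight * input is a faithful attribution here.
w = np.array([3.0, -2.0, 0.5, 0.0])
predict = lambda x: float(x @ w)
x = np.ones(4)
attr = w * x
drop = deletion_fidelity(predict, x, attr, k=2)  # masks features 0 and 1
```

For the toy model above, masking the two top-attributed features removes exactly their contribution to the score, so the drop equals the sum of their attributions.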
Most implemented papers
Meaningful Data Sampling for a Faithful Local Explanation Method
Data sampling plays an important role in most local explanation methods.
EXPLAN: Explaining Black-box Classifiers using Adaptive Neighborhood Generation
Defining a representative locality is a central challenge in perturbation-based explanation methods, since the choice of locality directly influences the fidelity and soundness of the resulting explanations.
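The locality these perturbation-based methods rely on can be sketched as follows, assuming a LIME-style setup (Gaussian perturbations around the instance, samples weighted by an exponential proximity kernel); the function name, noise scale, and kernel width are illustrative assumptions, not taken from EXPLAN itself.

```python
import numpy as np

def local_neighborhood(x, n_samples=100, scale=0.1, kernel_width=0.5, seed=None):
    """Sample a perturbation neighborhood around instance x and weight each
    sample by proximity to x, for fitting a local surrogate model."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    d = np.linalg.norm(Z - x, axis=1)              # distance to the instance
    weights = np.exp(-(d ** 2) / (kernel_width ** 2))  # closer samples count more
    return Z, weights

Z, wts = local_neighborhood(np.array([1.0, 2.0]), n_samples=50, seed=0)
```

Methods like EXPLAN adapt this neighborhood to the data distribution rather than sampling isotropically, which is precisely where the fidelity of the downstream explanation is decided.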
Developing a Fidelity Evaluation Approach for Interpretable Machine Learning
Although modern machine learning and deep learning methods enable in-depth data analytics, the predictive models they produce are often highly complex and lack transparency.
Towards Better Understanding Attribution Methods
Finally, we propose a post-processing smoothing step that significantly improves the performance of some attribution methods and discuss its applicability.
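A minimal sketch of what such post-hoc smoothing can look like, assuming a simple mean filter over a 2D attribution map with edge padding; the paper's actual smoothing kernel and parameters may differ.

```python
import numpy as np

def smooth_attributions(attr_map, radius=1):
    """Smooth a 2D attribution map with a (2*radius+1)^2 mean filter,
    replicating edge values so the output keeps the input's shape."""
    k = 2 * radius + 1
    padded = np.pad(attr_map, radius, mode="edge")
    out = np.zeros_like(attr_map, dtype=float)
    for i in range(attr_map.shape[0]):
        for j in range(attr_map.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

attr = np.zeros((3, 3))
attr[1, 1] = 9.0                 # a single attribution spike
out = smooth_attributions(attr)  # spreads the spike over its neighborhood
```

Smoothing spreads an isolated spike across neighboring positions while preserving its total mass, which tends to suppress noisy single-pixel attributions.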
Can local explanation techniques explain linear additive models?
Local model-agnostic additive explanation techniques decompose the predicted output of a black-box model into additive feature importance scores.
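For a linear additive model this decomposition can be written down exactly, which is what makes such models a useful test case for local explanation techniques. The sketch below assumes the standard completeness (local accuracy) property; the names and toy numbers are illustrative.

```python
import numpy as np

def additive_scores(weights, x, baseline):
    """For a linear model f(z) = w.z + b, assign each feature the score
    w_i * (x_i - baseline_i). These scores plus f(baseline) sum exactly
    to f(x) -- the completeness / local-accuracy property."""
    return weights * (x - baseline)

w, b = np.array([1.0, -3.0, 2.0]), 0.5
f = lambda z: float(z @ w + b)

x, base = np.array([2.0, 1.0, 0.0]), np.zeros(3)
scores = additive_scores(w, x, base)
residual = f(x) - (f(base) + scores.sum())  # zero when completeness holds
```

A local explanation technique that fails to recover these scores on a linear model is arguably unfaithful even in the easiest possible setting, which is the question this paper probes.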
SAME: Uncovering GNN Black Box with Structure-aware Shapley-based Multipiece Explanations
Post-hoc explanation techniques for graph neural networks (GNNs) provide economical ways to open up black-box graph models without retraining them.