Explainable Artificial Intelligence (XAI)
206 papers with code • 0 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in Explainable Artificial Intelligence (XAI).
Most implemented papers
RISE: Randomized Input Sampling for Explanation of Black-box Models
We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments.
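The deletion metric mentioned here scores a saliency map by progressively removing the pixels it ranks as most important and measuring how quickly the model's confidence collapses; a faithful explanation produces a steep drop and a small area under the curve. Below is a minimal sketch of that idea, not the authors' implementation; the `model` callable, the step size, and the zero baseline are assumptions for illustration.

```python
import numpy as np

def deletion_score(model, image, saliency, step=0.01, baseline=0.0):
    """Deletion-metric sketch: mask pixels in order of decreasing saliency and
    record how the model's probability for the explained class decays.

    model    -- assumed callable mapping an (H, W, C) image to the target-class probability
    image    -- (H, W, C) float array
    saliency -- (H, W) importance map being evaluated
    """
    h, w, _ = image.shape
    order = np.argsort(saliency.ravel())[::-1]        # most important pixels first
    n_per_step = max(1, int(step * h * w))
    x = image.copy()
    probs = [model(x)]
    for start in range(0, h * w, n_per_step):
        rows, cols = np.unravel_index(order[start:start + n_per_step], (h, w))
        x[rows, cols, :] = baseline                   # "delete" this batch of pixels
        probs.append(model(x))
    # Area under the deletion curve, normalized to the [0, 1] deletion fraction.
    return np.trapz(probs, dx=1.0 / (len(probs) - 1))
```

The insertion variant works the same way in reverse: pixels are gradually restored into a blurred or blank image, and a good explanation yields a fast rise in confidence.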
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models.
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood.
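The step from local explanations to dataset-wide ("global") insights typically amounts to computing an attribution for every sample and then aggregating across the dataset. The sketch below illustrates that aggregation with plain gradient-times-input attributions in PyTorch; it is not the Zennit/CoRelAy API (those packages provide LRP rules and analysis pipelines), and the `model`, `loader`, and choice of explained class are assumptions.

```python
import torch

def dataset_wide_relevance(model, loader, device="cpu"):
    """Aggregate per-sample (local) attributions into a dataset-wide relevance map.
    Uses gradient-x-input as a stand-in for a proper attribution method."""
    model.eval()
    total, count = None, 0
    for x, _ in loader:
        x = x.to(device).requires_grad_(True)
        out = model(x)
        score = out.max(dim=1).values.sum()        # explain each sample's predicted class
        grad, = torch.autograd.grad(score, x)
        relevance = (grad * x).abs().detach()      # local explanation per sample
        batch_sum = relevance.sum(dim=0)
        total = batch_sum if total is None else total + batch_sum
        count += x.shape[0]
    return total / count                           # mean relevance per input dimension
```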
Contrastive Explanations with Local Foil Trees
Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks.
AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark
Explainable Artificial Intelligence (XAI) is targeted at understanding how models perform feature selection and derive their classification decisions.
Do Not Trust Additive Explanations
Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.
TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP
While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training.
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
Machine learning (ML) is increasingly used to support decision-making in the healthcare sector.
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
The rise of deep learning in today's applications has created an increasing need to explain a model's decisions beyond its prediction performance, in order to foster trust and accountability.
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Heatmaps are appealing because they are intuitive and visual, but assessing their quality is not straightforward.