Fairness
1172 papers with code • 3 benchmarks • 20 datasets
Most implemented papers
FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking
Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing, since it allows joint optimization of the two tasks and offers high computational efficiency.
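As a rough illustration of the one-shot design the paper argues for (a minimal sketch with hypothetical names, not FairMOT's actual code), a shared backbone can feed both a detection head and a re-ID embedding head, so one forward/backward pass serves both tasks:

```python
# Minimal sketch (not FairMOT's code): shared backbone, two task heads.
import torch
import torch.nn as nn

class OneShotTracker(nn.Module):
    def __init__(self, num_ids: int, emb_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(           # stand-in for DLA/ResNet
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(64, 4, 1)      # e.g. center/size maps
        self.reid_head = nn.Conv2d(64, emb_dim, 1)
        self.id_classifier = nn.Linear(emb_dim, num_ids)  # train-time only

    def forward(self, x):
        feat = self.backbone(x)
        return self.det_head(feat), self.reid_head(feat)

model = OneShotTracker(num_ids=500)
det_out, emb_out = model(torch.randn(1, 3, 128, 128))
# Joint training sums a detection loss on det_out and an identity
# classification loss on the embeddings, optimizing both tasks at once.
```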
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
The toolkit's architectural design and abstractions enable researchers and developers to extend it with new algorithms and improvements, and to use it for performance benchmarking.
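AI Fairness 360 ships as the aif360 Python package. A minimal sketch of typical usage (assuming the raw Adult dataset files have been downloaded where the package expects them): compute group-fairness metrics on a dataset, then apply the Reweighing pre-processing mitigation:

```python
# Sketch of typical aif360 usage: metric, then a pre-processing mitigation.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = AdultDataset()            # needs the Adult data files on disk
privileged = [{'sex': 1}]           # 'sex' is a protected attribute here
unprivileged = [{'sex': 0}]

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("statistical parity difference:",
      metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())

# Reweighing adjusts instance weights to remove the measured group bias.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
```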
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks and to the reasons why such networks make specific decisions.
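A simplified sketch of the Score-CAM idea (helper names are ours; capturing the conv-layer activations, e.g. via a forward hook, is assumed): each activation map, normalized and upsampled, masks the input, and the resulting target-class score becomes that map's weight in the final saliency sum:

```python
# Simplified Score-CAM: score-weighted combination of activation maps.
import torch
import torch.nn.functional as F

def score_cam(model, layer_acts, image, target_class):
    """layer_acts: (1, K, h, w) activations captured from a conv layer."""
    _, K, _, _ = layer_acts.shape
    _, _, H, W = image.shape
    weights = []
    with torch.no_grad():
        for k in range(K):
            m = layer_acts[:, k:k + 1]                 # (1, 1, h, w)
            m = F.interpolate(m, size=(H, W), mode='bilinear',
                              align_corners=False)
            m = (m - m.min()) / (m.max() - m.min() + 1e-8)  # to [0, 1]
            logits = model(image * m)                  # masked forward pass
            weights.append(F.softmax(logits, dim=1)[0, target_class])
    w = torch.stack(weights).view(1, K, 1, 1)
    cam = F.relu((w * layer_acts).sum(dim=1))          # (1, h, w)
    return cam / (cam.max() + 1e-8)                    # conv-layer resolution
```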
A Critic Evaluation of Methods for COVID-19 Automatic Detection from X-Ray Images
In this paper, we compare and evaluate different testing protocols used for automatic COVID-19 diagnosis from X-Ray images in the recent literature.
ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks.
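The transferability measured here is largely zero-shot transfer: classifying with text prompts and no task-specific training. A sketch using a public CLIP checkpoint via Hugging Face transformers (the image path and label prompts are illustrative):

```python
# Zero-shot image classification with a language-augmented visual model.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")                # any local image
labels = ["a photo of a dog", "a photo of a cat"]
inputs = processor(text=labels, images=image,
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))      # no task-specific training
```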
Learning Adversarially Fair and Transferable Representations
In this paper, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream.
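A minimal sketch of this adversarial scheme (in the spirit of LAFTR, and of the pivot paper below, rather than their exact objectives): an encoder is trained so that a task classifier succeeds while an adversary fails to recover the sensitive attribute from the representation:

```python
# Adversarial fair representation learning, alternating two updates.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 8))
clf = nn.Linear(8, 1)   # predicts the task label y from the representation
adv = nn.Linear(8, 1)   # tries to predict the sensitive attribute a
bce = nn.BCEWithLogitsLoss()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()),
                            lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)

def train_step(x, y, a, lam=1.0):
    """x: features; y, a: float tensors of {0, 1} labels."""
    # 1) adversary step: learn to recover a from the (frozen) representation
    z = enc(x)
    adv_loss = bce(adv(z.detach()).squeeze(1), a)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) encoder/classifier step: good task loss, bad adversary loss
    z = enc(x)
    main_loss = bce(clf(z).squeeze(1), y) - lam * bce(adv(z).squeeze(1), a)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```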
Agnostic Federated Learning
A key learning scenario in large-scale applications is that of federated learning, where a centralized model is trained based on data originating from a large number of clients.
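Agnostic federated learning replaces the uniform average of client losses with a worst-case mixture: min over the model w of max over simplex weights λ of Σ_k λ_k L_k(w). A toy sketch of the mixture-weight ascent, using multiplicative weights (our assumption, one standard way to optimize over the simplex):

```python
# Worst-case mixture weights for the agnostic federated objective.
import numpy as np

def agnostic_update(client_losses, lam, eta=0.1):
    """client_losses: current loss L_k per client; lam: simplex weights."""
    lam = lam * np.exp(eta * client_losses)   # upweight badly-served clients
    return lam / lam.sum()

lam = np.ones(3) / 3
losses = np.array([0.2, 0.9, 0.4])
for _ in range(5):
    lam = agnostic_update(losses, lam)
print(lam)  # mass shifts toward the worst-off client
# The server then takes a gradient step on the lam-weighted loss.
```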
Learning to Pivot with Adversarial Networks
Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
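A toy audit in this spirit (a brute-force sketch, not the Kearns et al. algorithm): enumerate a small, explicit class of subgroups and report the largest false-positive-rate gap against the population. The hardness result above explains why such enumeration does not scale to rich subgroup classes, motivating learning-based heuristics:

```python
# Brute-force subgroup FPR audit over conjunctions of two attributes.
import numpy as np

def fpr(y_true, y_pred, mask):
    neg = mask & (y_true == 0)                 # actual negatives in subgroup
    return y_pred[neg].mean() if neg.any() else 0.0

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)              # stand-in classifier output
attrs = rng.integers(0, 2, (1000, 3))          # three binary attributes

base = fpr(y_true, y_pred, np.ones(1000, dtype=bool))
worst = max(
    abs(fpr(y_true, y_pred, (attrs[:, i] == vi) & (attrs[:, j] == vj)) - base)
    for i in range(3) for j in range(i + 1, 3)
    for vi in (0, 1) for vj in (0, 1)
)
print(f"largest subgroup FPR gap: {worst:.3f}")
```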
An Empirical Study of Rich Subgroup Fairness for Machine Learning
In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. [2018]. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes.