Membership Inference Attack
55 papers with code • 0 benchmarks • 0 datasets
Libraries
Use these libraries to find Membership Inference Attack models and implementations

Most implemented papers
Membership Inference Attacks against Machine Learning Models
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
Synthesis of Realistic ECG using Generative Adversarial Networks
Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test their ability to withstand a simple membership inference attack.
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence score vector, as predicted by the target classifier, and predicts whether the sample is a member or non-member of the target classifier's training dataset.
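The black-box attack described above can be sketched as follows. This is a hedged illustration, not MemGuard's exact construction: the synthetic confidence vectors, the sorting step, and the logistic-regression attack model are all assumptions made for the example.

```python
# Illustrative sketch: a binary "attack" classifier that maps a target
# model's confidence score vector to a member / non-member guess.
# All data here is synthetic; in practice the labeled confidence
# vectors would come from shadow models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_confidences(n, member, n_classes=10):
    # Members tend to receive sharper (more peaked) confidence vectors;
    # a larger logit spread yields a sharper softmax.
    logits = rng.normal(0.0, 4.0 if member else 1.0, size=(n, n_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    # Sort each vector descending so the feature is label-agnostic.
    return -np.sort(-p, axis=1)

# Confidence vectors labeled by membership (assumed to be gathered
# from shadow models trained on similarly distributed data).
X = np.vstack([fake_confidences(500, True), fake_confidences(500, False)])
y = np.array([1] * 500 + [0] * 500)

attack = LogisticRegression(max_iter=1000).fit(X, y)
acc = attack.score(X, y)  # attack accuracy on this synthetic data
```

Sorting the confidence vector before classification is a common preprocessing choice in this line of work, since it makes the attack independent of which class the target model predicted.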
Disparate Vulnerability to Membership Inference Attacks
Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model.
Membership Inference Attacks on Machine Learning: A Survey
In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.
Membership Inference Attacks From First Principles
A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset.
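A minimal baseline for the membership query described above is a loss threshold: examples the model fits with unusually low loss are guessed to be training members. The toy loss distributions and the threshold value below are illustrative assumptions; the paper itself develops a much stronger likelihood-ratio test calibrated per example.

```python
# Loss-threshold membership inference baseline (hypothetical setup).
import numpy as np

rng = np.random.default_rng(1)

# Toy per-example losses: training members tend to be fit tightly,
# so their losses concentrate near zero; non-members spread higher.
member_losses = rng.gamma(shape=1.0, scale=0.05, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)

THRESHOLD = 0.2  # assumed; in practice calibrated, e.g. on shadow models

def infer_membership(losses, tau=THRESHOLD):
    """Guess 'member' when the model's loss on the example is below tau."""
    return losses < tau

tpr = infer_membership(member_losses).mean()     # true-positive rate
fpr = infer_membership(nonmember_losses).mean()  # false-positive rate
```

As the paper argues, reporting the true-positive rate at a low false-positive rate is a more meaningful measure of such an attack than average accuracy.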
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment
By simulating the attack mechanism as a safety test, SafeCompress can automatically compress a big model into a small one following the dynamic sparse training paradigm.
Understanding Membership Inferences on Well-Generalized Learning Models
Membership Inference Attack (MIA) determines the presence of a record in a machine learning model's training data by querying the model.
Machine Learning with Membership Privacy using Adversarial Regularization
In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters.