This is a PyTorch implementation of our paper at ICCV 2021:
Knowledge Enriched Distributional Model Inversion Attacks [paper] [arxiv]
We propose a novel 'Inversion-Specific GAN' that can better distill, from public data, knowledge useful for attacking private models. Moreover, we propose to model a private data distribution for each target class, which we refer to as 'Distributional Recovery'.
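As a rough illustration of the idea behind distributional recovery (this toy sketch is ours, not the repo's code): instead of optimizing a single latent code, one optimizes the parameters of a Gaussian over latent codes via the reparameterization trick. The 1-D `generator` and `target_logit` below are hypothetical stand-ins for the GAN generator and the target classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    return z  # stand-in for the GAN generator: identity map

def target_logit(x):
    # Stand-in for the target classifier's logit on the attacked class;
    # confidence peaks at x = 3.0. Its derivative -2*(x - 3) is used below.
    return -(x - 3.0) ** 2

mu, log_sigma = 0.0, 0.0   # parameters of the latent distribution to recover
lr, n = 0.05, 64           # step size and samples per step

for _ in range(500):
    eps = rng.standard_normal(n)
    z = mu + np.exp(log_sigma) * eps        # reparameterized latent samples
    x = generator(z)
    dlogit_dx = -2.0 * (x - 3.0)            # analytic d(target_logit)/dx
    # Gradient ascent on the expected logit w.r.t. the distribution parameters:
    mu += lr * dlogit_dx.mean()
    log_sigma += lr * (dlogit_dx * eps * np.exp(log_sigma)).mean()

# mu moves toward the classifier's high-confidence region (near 3.0),
# while sigma shrinks as the distribution concentrates on it.
```

The same principle, with a real generator, classifier, and higher-dimensional latents, underlies modeling a distribution per target class rather than recovering one sample at a time.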
This code has been tested with Python 3.6, PyTorch 1.0, and CUDA 10.0.
Train the target classifier:
python train_classifier.py

Train the inversion-specific GAN:
python k+1_gan.py

Train a general GAN:
python binary_gan.py
Run the attack:
python recovery.py

--model chooses the target model to attack.
--improved_flag indicates whether an inversion-specific GAN is used. If False, a general GAN is applied instead.
--dist_flag indicates whether distributional recovery is performed. If False, optimization is applied to a single sample instead of a distribution.

Should both improved_flag and dist_flag be False, the attack reduces to the method proposed in [1].

[1] Zhang, Yuheng, et al. "The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2020.
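For reference, a typical invocation combining these flags might look like the following; the model name `VGG16` is a hypothetical example, and the exact argument names and accepted values should be checked against recovery.py's argument parser.

```shell
# Hypothetical example: attack a VGG16 target with the inversion-specific
# GAN and distributional recovery both enabled.
python recovery.py --model VGG16 --improved_flag True --dist_flag True
```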