This is the code repository for "Conflicting Interactions Among Protection Mechanisms for Machine Learning Models", to appear in AAAI 2023.
Automatic GPU allocation requires jc and nvidia-smi; it is disabled by default.
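The sketch below illustrates one way such auto-allocation can work (parsing nvidia-smi to pick the GPU with the least used memory). It is a minimal sketch, not this repo's actual implementation, and the helper name pick_free_gpu is hypothetical:

# Illustrative sketch, not this repo's implementation: select the GPU
# with the least used memory by parsing nvidia-smi output.
import os
import subprocess

def pick_free_gpu():  # hypothetical helper
    # Query per-GPU used memory in MiB as bare CSV (no header, no units).
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    rows = [line.split(",") for line in out.strip().splitlines()]
    # Return the index of the GPU whose used memory is smallest.
    return min(rows, key=lambda r: int(r[1]))[0].strip()

os.environ["CUDA_VISIBLE_DEVICES"] = pick_free_gpu()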
This codebase uses wandb as the logging backend.
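For reference, a minimal wandb logging pattern looks like the sketch below; the project name and metrics are placeholders, not this repo's actual configuration:

# Minimal wandb usage sketch; project/metric names are placeholders.
import wandb

run = wandb.init(project="ml-conf-interests", config={"lr": 0.01})
for step in range(10):
    # Log a scalar metric at each step.
    wandb.log({"loss": 1.0 / (step + 1)}, step=step)
run.finish()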
You need conda. Create a virtual environment and install the requirements:
conda env create -f environment.yaml
To activate:
conda activate ml-conf-interests
To update the environment:
conda env update --name ml-conf-interests --file environment.yaml
or
conda activate ml-conf-interests
conda env update --file environment.yaml
Note: run all experiments from the project root ($ROOT).
Running DP + watermarking:
python3 -u -m src.main task=dp-wm
Running adversarial training + watermarking:
python3 -u -m src.main task=adv-wm
Running adversarial training + dp + watermarking:
python3 -u -m src.main task=adv-dp-wm
You also need to specify which dataset/watermark/model combination you want to train with, e.g.:
python3 -u -m src.main task=dp-wm learner=mnist
Invoke --help for more information.
To run experiments with different hyperparameters, you can override them from the CLI or create new config files; see Hydra's documentation for more information.
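For example, a CLI override might look like the line below; the parameter path learner.lr is hypothetical and depends on the actual config schema:

python3 -u -m src.main task=dp-wm learner=mnist learner.lr=0.01

Hydra's multirun flag (-m) can sweep several values in one invocation, e.g. learner.lr=0.01,0.001.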
For dataset inference (DI), train models with this script and evaluate them using the official DI repo. However, instead of the official code we use the modified version from this work, which extends the original code to other datasets; that code is available upon request from the authors.