mac2win opened 9 months ago
There are a few problems: many of the pre-trained models seem to be missing, it is hard to tell which weights correspond to which parts of the code, and some of the provided links appear to be broken. Could you provide the trained models you used to produce the results in this paper? Some of the links given are not usable, or are difficult to match up with parts of the code.
Hi,
For all the model weights, I basically use the model weights from DiffusionCLIP: https://github.com/gwang-kim/DiffusionCLIP
Hope this helps.
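In case it helps to sanity-check the downloads, here is a minimal PyTorch sketch of loading such a checkpoint. The filename and the placeholder module are assumptions for illustration, not DiffusionCLIP's actual API; in practice you would instantiate the DDPM UNet class that ships with that repo:

```python
import torch
import torch.nn as nn

# Tiny stand-in epsilon-predictor; replace with the DDPM UNet class
# shipped in the DiffusionCLIP repo.
class PlaceholderUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x, t):
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = PlaceholderUNet().to(device)

# "celeba_hq.ckpt" is an assumed filename for the downloaded weights.
ckpt = torch.load("celeba_hq.ckpt", map_location=device)
# Some checkpoints wrap the weights in a dict under "state_dict".
state = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
model.load_state_dict(state, strict=False)  # strict=False only because of the stand-in
model.eval()
```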
**Prepare Diffusion Models**

For the CelebA-HQ identity and gender datasets, we use the diffusion model weights pretrained on CelebA-HQ from DiffusionCLIP: IR-SE50. For the AFHQ dataset, we use the diffusion model weights from ILVR+ADM: drive, and finetune them with the following command (although the best way is to train a diffusion model for the AFHQ dataset from scratch):
```
python main_afhq_train.py
```
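For context, finetuning here amounts to continuing standard DDPM training (the epsilon-prediction MSE objective) on AFHQ images, starting from the ILVR+ADM weights. Below is a generic sketch of that loop, assuming a 256x256 `ImageFolder` layout at `afhq/train`, a linear noise schedule, and the stand-in UNet from the loading sketch above; it is not the actual contents of `main_afhq_train.py`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Stand-in epsilon-predictor; in practice, load the ILVR+ADM UNet and its
# pretrained weights here (see the loading sketch above).
class PlaceholderUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x, t):
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = PlaceholderUNet().to(device).train()

# Standard linear DDPM noise schedule (an assumption; match the
# schedule of the pretrained model you start from).
T = 1000
betas = torch.linspace(1e-4, 0.02, T, device=device)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

tfm = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
loader = DataLoader(datasets.ImageFolder("afhq/train", tfm),
                    batch_size=4, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-5)  # small lr for finetuning

for x0, _ in loader:
    x0 = x0.to(device)
    t = torch.randint(0, T, (x0.size(0),), device=device)
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # forward process q(x_t | x_0)
    loss = F.mse_loss(model(x_t, t), noise)         # epsilon-prediction objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```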
**Generate Attacks**

The commands for the CelebA-HQ identity dataset are stored in the `commands/command_for_celebaHQ_identity_ST_approach` and `commands/command_for_celebaHQ_identity_LM_approach` folders.
The commands for the CelebA-HQ gender dataset are stored in the `commands/command_for_celebaHQ_gender` folder.
The commands for the AFHQ dataset are stored in the `commands/command_for_AFHQ` folder.
Basically, taking the CelebA-HQ identity dataset as an example, for the ST approach we have:
```
python main.py --attack \
    --config celeba.yml \
    --exp experimental_log_path \
    --t_0 500 \
    --n_inv_step 40 \
    --n_test_step 40 \
    --n_precomp_img 100 \
    --mask 9 --diff 9 --tune 0 --black 0
```
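To help match these flags to the code, here is a hypothetical reconstruction of the relevant part of `main.py`'s argument parser. The defaults mirror the example command, and the help strings are my reading of DiffusionCLIP-style conventions (`t_0` as the diffusion return step, `n_inv_step`/`n_test_step` as DDIM inversion/generation steps), not documentation from this repo:

```python
import argparse

# Hypothetical reconstruction of the CLI used in the example command;
# defaults mirror the values above, meanings are educated guesses.
parser = argparse.ArgumentParser()
parser.add_argument("--attack", action="store_true", help="run attack generation")
parser.add_argument("--config", default="celeba.yml", help="dataset/model config file")
parser.add_argument("--exp", default="experimental_log_path", help="experiment log directory")
parser.add_argument("--t_0", type=int, default=500, help="diffusion return step (guess)")
parser.add_argument("--n_inv_step", type=int, default=40, help="DDIM inversion steps (guess)")
parser.add_argument("--n_test_step", type=int, default=40, help="DDIM generation steps (guess)")
parser.add_argument("--n_precomp_img", type=int, default=100, help="images to precompute (guess)")
parser.add_argument("--mask", type=int, default=9)
parser.add_argument("--diff", type=int, default=9)
parser.add_argument("--tune", type=int, default=0)
parser.add_argument("--black", type=int, default=0)

args = parser.parse_args()
print(args)
```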