Open vivien319 opened 1 year ago
Your work is amazing, but I have some questions about the limited description in your paper of the experiments on PubFig & CelebA and Tiny-ImageNet & Caltech-256. How many classes of the CelebA dataset are used for training? Are the experimental settings for PubFig & CelebA and Tiny-ImageNet & Caltech-256 the same as for the CIFAR-10/Tiny-ImageNet experiments: the augmentation of POOD for the surrogate-model training stage, the number of training epochs during the attack phase, the optimizer parameter settings, the random seeds, and so on? I would appreciate it if you could provide the related code for these datasets or a more specific experimental setup.

I use 200 classes from CelebA to train the model, but if you want to include more classes, I think it will be fine. As for the second question, yes, the pipeline is the same across all the experiments.

Could you please provide the related code for these datasets?
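For reference, here is a minimal sketch of how a 200-identity CelebA training subset could be constructed with torchvision. This is not the authors' pipeline: the use of `target_type="identity"`, the choice of the 200 most frequent identities, and the 64x64 resize are assumptions for illustration only.

```python
# Hypothetical sketch (not the authors' code): build a 200-identity CelebA
# subset with torchvision and remap the labels to contiguous class indices.
from collections import Counter

from torch.utils.data import Dataset
from torchvision import datasets, transforms


class CelebA200(Dataset):
    """Keeps only the 200 most frequent CelebA identities (an assumption;
    the paper does not state how its 200 classes were chosen) and remaps
    them to labels 0..199."""

    def __init__(self, root="./data", split="train"):
        tf = transforms.Compose([transforms.Resize((64, 64)),
                                 transforms.ToTensor()])
        self.base = datasets.CelebA(root=root, split=split,
                                    target_type="identity",
                                    transform=tf, download=True)
        ids = self.base.identity[:, 0].tolist()
        top = [i for i, _ in Counter(ids).most_common(200)]
        self.label_map = {orig: new for new, orig in enumerate(sorted(top))}
        self.indices = [k for k, i in enumerate(ids) if i in self.label_map]

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, idx):
        img, identity = self.base[self.indices[idx]]
        return img, self.label_map[int(identity)]
```

The wrapper remaps the surviving identities to contiguous labels so a standard 200-way classification head can be trained on the subset; the actual class-selection rule and preprocessing would need to be confirmed against the authors' setup.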