HanxunH / CognitiveDistillation

[ICLR2023] Distilling Cognitive Backdoor Patterns within an Image
https://arxiv.org/abs/2301.10908
MIT License

Got problems in training model #1

Open lui1343 opened 1 year ago

lui1343 commented 1 year ago

Thank you for your work and code! When I try to train a model, I use the config you stored in `./configs`:

```
python train.py --exp_path output --exp_config configs/celeba --exp_name celeba_rn18
```

But I run into the following error:

```
File "train.py", line 49, in epoch_exp_stats
    loss = F.cross_entropy(logits, labels, reduction='none')
File "/home/miniconda3/envs/cog/lib/python3.8/site-packages/torch/nn/functional.py", line 3026, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
TypeError: cross_entropy_loss(): argument 'input' (position 1) must be Tensor, not list
```

I didn't change the code, so I wonder if I set the config in the wrong way. If you know why this happens, please offer me some help. Thanks a lot!
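For context, here is a minimal sketch of how this `TypeError` arises. It assumes (based on the maintainer's fix of providing a separate face-attribute training script) that the CelebA model returns a list of per-attribute logits rather than a single tensor; the model shape and names here are illustrative, not taken from the repository:

```python
import torch
import torch.nn.functional as F

# Hypothetical multi-head setup: a CelebA face-attribute model may return
# one logits tensor per attribute head, i.e. a Python list of tensors.
batch, num_classes = 4, 2
logits_list = [torch.randn(batch, num_classes) for _ in range(3)]
labels = torch.randint(0, num_classes, (batch,))

# Passing the list directly reproduces the error from the traceback:
try:
    F.cross_entropy(logits_list, labels, reduction='none')
except TypeError as e:
    print(e)  # argument 'input' (position 1) must be Tensor, not list

# A plain classification loop expects a single Tensor input, e.g. one head:
loss = F.cross_entropy(logits_list[0], labels, reduction='none')
print(loss.shape)  # torch.Size([4])
```

This is why `train.py`, which assumes a single-tensor output, fails on the CelebA config, and a dedicated training script is needed for the multi-output model.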

HanxunH commented 1 year ago

Thanks for your interest in our work. For CelebA experiments, please use train_face_attr.py. All other arguments are the same.

I have just added this file: https://github.com/HanxunH/CognitiveDistillation/blob/main/train_face_attr.py
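Following the reply above ("all other arguments are the same"), the corrected invocation would presumably mirror the original command with only the script name swapped; this is a sketch, not a command taken verbatim from the repository docs:

```shell
# Same arguments as before, but using the dedicated CelebA training script
python train_face_attr.py --exp_path output --exp_config configs/celeba --exp_name celeba_rn18
```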