If --catal is set to True, the model uses category-balanced sampling on the source domain. In this case, --batch_size is the number of images sampled per category, so the actual batch size equals --batch_size * the number of categories. --bs_limit is an upper bound on that actual batch size, which prevents GPU RAM overflow.
For example, on Office-31 with --catal set to True and --batch_size == 2, the actual batch fed into the GPU is --batch_size * 31 (the number of classes) = 62, and --bs_limit constrains it such that --batch_size * 31 <= --bs_limit.
If --catal is set to False, there is no category-balanced sampling, and --batch_size is simply the actual batch size fed into the GPU.
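To make the relationship concrete, here is a minimal sketch of the logic described above; the function name and the clipping rule are illustrative assumptions, not code from cada_styflip.py:

```python
# Minimal sketch of the batch-size logic described above; the function
# name and the clipping rule are assumptions, not code from cada_styflip.py.

def effective_batch_size(batch_size: int, num_classes: int,
                         bs_limit: int, catal: bool) -> int:
    """Return the batch size actually fed to the GPU."""
    if not catal:
        # Without category-balanced sampling, --batch_size is used directly.
        return batch_size
    raw = batch_size * num_classes  # per-category count times number of classes
    if raw > bs_limit:
        # Assumed clipping rule: shrink the per-category count until the
        # batch fits under --bs_limit; the real rule is code-specific.
        per_class = max(1, bs_limit // num_classes)
        raw = per_class * num_classes
    return raw

# Office-31 example from above: 2 images per class * 31 classes = 62 <= 96.
print(effective_batch_size(2, 31, 96, catal=True))  # 62
```

The key point is that with --catal the actual batch scales with the number of classes, so --bs_limit, not --batch_size, is what ultimately bounds GPU memory.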
The file cada_styflip.py fails to run, even though I have installed xxx.
The error message is: ValueError: Expected input batch_size (90) to match target batch_size (84).
Here are my parameter settings: [cada_styflip.py][INFO] Namespace(amp=True, aug_num=1, batch_size=32, bs_limit=96, catal=True, dataset='office-home', discriminator='Categorical_discriminator', div='kl', eval_epoch=1, feat_dim=256, freq=False, gpu_id='4', hid_dim=1024, iter_epoch=500, log_step_pre=10, lr=0.01, lr_decay=0.75, lr_gamma=0.001, margin=0.05, max_epoch=30, momentum=0.9, nesterov=True, net='resnet50', num_classes=65, num_domains=4, p_adain=0.01, pretrained=True, rand_aug=False, resume=False, resume_file=None, seed=2020, sigma=0.1, source=0, sub_log='0727_catal_amp_home', temp=0.05, weight_decay=0.001, with_permute_adain=False, workers=3)
When I switch the dataset to office31, the code runs with the following parameters: [cada_styflip.py][INFO] Namespace(amp=True, aug_num=1, batch_size=32, bs_limit=96, catal=True, dataset='office31', discriminator='Categorical_discriminator', div='kl', eval_epoch=1, feat_dim=256, freq=False, gpu_id='4', hid_dim=1024, iter_epoch=500, log_step_pre=10, lr=0.01, lr_decay=0.75, lr_gamma=0.001, margin=0.05, max_epoch=30, momentum=0.9, nesterov=True, net='resnet50', num_classes=31, num_domains=3, p_adain=0.01, pretrained=True, rand_aug=False, resume=False, resume_file=None, seed=2020, sigma=0.1, source=0, sub_log='0727_catal_amp', temp=0.05, weight_decay=0.001, with_permute_adain=False, workers=3). However, the accuracy is only 65.67%, far from the 92.4% reported in the paper.
But when I set batch_size=64 for office31, I get the error ValueError: Expected input batch_size (50) to match target batch_size (36).
What is the relationship between bs_limit and batch_size? Why does the code frequently give errors due to batch_size?