krumo / Domain-Adaptive-Faster-RCNN-PyTorch

Domain Adaptive Faster R-CNN in PyTorch

why is only batch size of 2 supported for consistency_loss? #13

Closed. frezaeix closed this issue 3 years ago

frezaeix commented 4 years ago

Hi,

Thanks for sharing your code. I have a question:

I would like to compute and log the losses for the validation set during training. To do this I followed a more recent version of mask_rcnn (lines 128-174). However, the problem is that evaluation on the validation set uses a batch size of 1, while computing the consistency loss requires a batch size of 2. I would like to know why that is the case. Is it OK to use a batch size of 2 for computing this loss on the validation set, or do you have any other suggestions for a workaround? Thank you
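
Roughly what I am doing for the validation losses (a simplified fragment of my training loop; the names here are mine and may not match the actual trainer):

# run the loss heads on the validation set without backprop
model.train()  # the model only returns the loss dict in train mode
with torch.no_grad():
    for images_val, targets_val, _ in data_loader_val:
        images_val = images_val.to(device)
        targets_val = [t.to(device) for t in targets_val]
        loss_dict_val = model(images_val, targets_val)
        losses_val = sum(loss for loss in loss_dict_val.values())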

krumo commented 4 years ago

Hi @frezaeix, according to the definition in the original paper, the consistency loss can of course be computed with a batch size of 1. The batch size is asserted to be 2 when computing the consistency loss mainly because of the training process, where each batch should consist of 1 source image and 1 target image.
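
For reference, the consistency regularizer in the paper is defined per image, roughly (my paraphrase of the paper's notation):

L_{cst} = \sum_{i,j} \left\lVert \frac{1}{|I|} \sum_{u,v} p_i^{(u,v)} - p_{i,j} \right\rVert_2

where p_i^{(u,v)} is the image-level domain classifier output at activation (u,v) of image i, p_{i,j} is the instance-level output for the j-th region proposal of image i, and |I| is the number of activations. Nothing in it couples the two images of a batch, which is why a batch size of 1 works in principle.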

If you want to compute the consistency loss for validation, I think you could comment out the assertion and add a condition at https://github.com/krumo/Domain-Adaptive-Faster-RCNN-PyTorch/blob/f75c583f8dbe6d7a2a87272fde3b794773c38527/maskrcnn_benchmark/layers/consistency_loss.py#L19:

if N==1: 
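    # N == 1: single image (e.g. a validation batch); repeat its mean over all len_ins instances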
    img_fea_mean = img_fea_per_level[i].view(1, 1).repeat(len_ins, 1)
elif N==2:
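    # N == 2: training pair (1 source + 1 target image); intervals[i] is image i's instance count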
    img_fea_mean = img_fea_per_level[i].view(1, 1).repeat(intervals[i], 1)
else:
    raise NotImplementedError

Unfortunately I cannot access a desktop with GPUs right now, so I cannot test the code above. But I believe that for a batch size of 1 you wouldn't have to change much code.
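
To make the shape handling concrete, here is a small self-contained sketch (simplified, with my own helper name, not the exact code in consistency_loss.py) of how the branch above broadcasts each image-level mean onto the instance features:

import torch

def broadcast_img_mean(img_fea_per_level, intervals, len_ins):
    # pool one scalar per image, then repeat it so it lines up with the instance features
    N = img_fea_per_level.size(0)
    pooled = img_fea_per_level.reshape(N, -1).mean(dim=1)  # shape (N,)
    if N == 1:
        # single image (e.g. validation): its mean covers every instance
        return pooled[0].view(1, 1).repeat(len_ins, 1)
    elif N == 2:
        # training pair: the first intervals[0] instances belong to the source image,
        # the remaining intervals[1] to the target image
        return torch.cat(
            [pooled[i].view(1, 1).repeat(intervals[i], 1) for i in range(N)],
            dim=0,
        )
    else:
        raise NotImplementedError

# e.g. a single validation image with 7 instances:
img_fea = torch.randn(1, 1, 32, 32)   # (N, A, H, W) image-level domain output
print(broadcast_img_mean(img_fea, intervals=[7, 0], len_ins=7).shape)  # torch.Size([7, 1])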