zhiyuanyou / SAFECount

[WACV 2023] Few-shot Object Counting with Similarity-Aware Feature Enhancement

Increasing batch size #5

Closed jaideep11061982 closed 1 year ago

jaideep11061982 commented 1 year ago

@zhiyuanyou Is it possible to convert this into a batch operation? Currently it is single-image based in both training and inference. 1) Can this be converted to a batch-based operation? 2) Would increasing the batch size improve the gradients?

3) I do not have DDP (only one GPU). Can I comment this out in training?

    if distributed:
        dist.all_reduce(train_loss)
        dist.all_reduce(iter)

4) I am now using a ResNet-50 backbone and retraining from its pretrained weights, but the training loss is very low. Is that expected?

    Train | Epoch : 1 / 200 | Iter: 1 / 3659 | lr: 2e-05 | Data: 0.38, Time: 12.12 | Loss: 0.0008869824814610183
    Train | GT: 200.0, Pred: 299.8 | Best Val MAE: inf, Best Val RMSE: inf
    Train | Epoch : 1 / 200 | Iter: 801 / 3659 | lr: 2e-05 | Data: 0.00, Time: 0.18 | Loss: 0.2721385396662299

5) Did you train for 200 epochs ?

zhiyuanyou commented 1 year ago

Hi~

  1. Actually no. Some operations only support `batch_size == 1`.
  2. Same as 1.
  3. Yes, you can comment out these lines.
  4. The scale of the loss is reasonable. It can eventually reach 1e-6 or even 1e-8.
  5. 200 epochs for FSC-147.
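A minimal sketch of the single-GPU guard from point 3, assuming a `distributed` flag set at startup (the function name and signature here are illustrative, not the repo's exact code):

```python
def reduce_metrics(train_loss, n_iter, distributed=False):
    # Under DDP, all_reduce sums each value across workers; on a single
    # GPU we simply skip the collective call, which is equivalent to
    # commenting those lines out of the training loop.
    if distributed:
        import torch.distributed as dist
        dist.all_reduce(train_loss)
        dist.all_reduce(n_iter)
    return train_loss, n_iter

loss, it = reduce_metrics(0.5, 1, distributed=False)
print(loss, it)  # 0.5 1 (values pass through unchanged on one GPU)
```

Guarding the collectives behind the flag is safer than deleting them, since the same script then still works if you later launch with DDP.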
jaideep11061982 commented 1 year ago

@zhiyuanyou
1) If I use a different image size, say a rectangular 512x768, which parameter do I have to adjust? After using the crop-ROI function, one dimension of the feature comes out as 0.
2) How do you choose the out_stride? Do I have to change it depending on the resized image shape I choose?
3) I also use ResNet-50; does the out_stride value in the config change depending on the backbone?
4) I put some debug statements before the ROI crop, and for 512x768 I get:

feat torch.Size([1, 256, 128, 192]) tensor([[216., 497., 241., 527.],
        [186., 403., 201., 425.],
        [330., 498., 366., 518.],
        [256., 245., 278., 279.]],

How is the ROI crop able to extract the desired regions, given that the box positions overshoot the feature shape in the h, w dimensions?

5) Why do we divide the boxes by out_stride in the crop_roi function (`boxes_scaled = boxes / out_stride`)?

zhiyuanyou commented 1 year ago

Hi~

  1. A 0 dimension in the feature size means some boxes are too small. Which dataset did you use, FSC-147 or your own?
  2. Choose out_stride according to `feature_size = image_size / out_stride`.
  3. No.
  4. I do not understand. Could you please describe this issue more clearly?
  5. The boxes are in the original image coordinates; we need to convert them to feature-map coordinates, so we divide by out_stride.
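Putting points 2 and 5 together, the arithmetic can be checked directly. A small sketch in plain Python, with out_stride = 4 and the 512x768 image and first box from the debug output above (helper names are mine, not the repo's):

```python
def feature_size(image_hw, out_stride):
    # feature_size = image_size / out_stride, per spatial dimension
    return tuple(s // out_stride for s in image_hw)

def scale_boxes(boxes, out_stride):
    # Boxes given in original image coordinates; dividing every
    # coordinate by out_stride maps them onto the feature map.
    return [tuple(c / out_stride for c in b) for b in boxes]

print(feature_size((512, 768), 4))                    # (128, 192)
print(scale_boxes([(216., 497., 241., 527.)], 4))     # [(54.0, 124.25, 60.25, 131.75)]
```

Note that (128, 192) matches the `feat torch.Size([1, 256, 128, 192])` printout, and the scaled box coordinates fall inside that grid, which is why the "overshooting" raw boxes are still valid after division.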
jaideep11061982 commented 1 year ago

@zhiyuanyou I figured out the logic. Your ROI crop and bounding-box transformation assume the final feature-map size is the image size divided by 4; if that changes, it breaks the box-based crop computation in the ROI.

zhiyuanyou commented 1 year ago

Yes. ROI cropping is performed on the feature map (whose size is image_size / out_stride), so we need to divide by out_stride first.
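That order of operations can be sketched minimally as follows, assuming boxes in (y1, x1, y2, x2) image coordinates and using NumPy for illustration (this is a simplified stand-in, not the repo's actual crop_roi):

```python
import numpy as np

def crop_roi(feat, box, out_stride=4):
    # feat: (C, H, W) feature map; box: (y1, x1, y2, x2) in image coords.
    # Scale the box down to feature coordinates first, then slice.
    y1, x1, y2, x2 = (int(round(c / out_stride)) for c in box)
    return feat[:, y1:y2, x1:x2]

feat = np.zeros((256, 128, 192))              # features for a 512x768 image
roi = crop_roi(feat, (216., 497., 241., 527.))
print(roi.shape)                              # (256, 6, 8)
```

Rounding to integer indices loses sub-pixel precision; a real implementation could instead use something like torchvision's `roi_align` with `spatial_scale = 1 / out_stride` to interpolate.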