zhanghang1989 / PyTorch-Encoding

A CV toolkit for my papers.
https://hangzhang.org/PyTorch-Encoding/
MIT License

About the crop size for training and testing #86

Open PkuRainBow opened 6 years ago

PkuRainBow commented 6 years ago
class BaseNet(nn.Module):
    def __init__(self, nclass, backbone, aux, se_loss, dilated=True, norm_layer=None,
                 base_size=576, crop_size=608, mean=[.485, .456, .406],
                 std=[.229, .224, .225], root='~/.encoding/models'):

Previously, I noticed that you chose base_size=520, crop_size=480, and now you have changed them to base_size=576, crop_size=608.

However, I still have a concern: 608 should be larger than the largest height or width in your dataset, so we would need to increase this parameter when dealing with datasets that contain larger images.
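For instance, a quick sanity check along these lines could verify that assumption before training (the helper name and directory path here are hypothetical, not from the repo):

import glob
from PIL import Image

def max_image_dims(image_dir, pattern='*.jpg'):
    """Scan a folder and return the largest height and width found."""
    max_h, max_w = 0, 0
    for path in glob.glob(f'{image_dir}/{pattern}'):
        w, h = Image.open(path).size  # PIL gives (width, height)
        max_h, max_w = max(max_h, h), max(max_w, w)
    return max_h, max_w

crop_size = 608
max_h, max_w = max_image_dims('./mydata/images')  # hypothetical path
if max(max_h, max_w) > crop_size:
    print(f'crop_size={crop_size} is smaller than the largest '
          f'image dimension {max(max_h, max_w)}; consider increasing it.')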

Besides, I noticed that you mentioned your MultiEvalModule only supports single-image evaluation, yet you also provide a batch size parameter, which seems contradictory. I guess we can only set the batch size to 1 during the testing phase.

  def forward(self, image):
        """Multi-size Evaluation"""
        # only single image is supported for evaluation
zhanghang1989 commented 6 years ago
  1. Training augmentation is handled here: https://github.com/zhanghang1989/PyTorch-Encoding/blob/master/encoding/datasets/base.py#L61-L99 (a simplified sketch follows after this list).
  2. Sorry about the confusion. A single image per GPU is supported, but we can still use multiple GPUs, and the test batch size is set to the number of GPUs: https://github.com/zhanghang1989/PyTorch-Encoding/blob/master/experiments/segmentation/test.py#L123
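For reference, here is a simplified sketch of that scale augmentation (a paraphrase under the defaults quoted above, not a verbatim copy of base.py): random horizontal flip, random rescale of the short edge around base_size, zero-padding up to crop_size, then a random crop.

import random
from PIL import Image, ImageOps

def sync_transform(img, mask, base_size=576, crop_size=608):
    """Apply the same random flip/scale/pad/crop to image and mask."""
    # random horizontal flip
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
        mask = mask.transpose(Image.FLIP_LEFT_RIGHT)
    # random scale: short edge between 0.5x and 2.0x of base_size
    short_size = random.randint(int(base_size * 0.5), int(base_size * 2.0))
    w, h = img.size
    if h > w:
        ow, oh = short_size, int(h * short_size / w)
    else:
        oh, ow = short_size, int(w * short_size / h)
    img = img.resize((ow, oh), Image.BILINEAR)
    mask = mask.resize((ow, oh), Image.NEAREST)
    # pad with zeros so both sides reach at least crop_size
    if short_size < crop_size:
        padw, padh = max(crop_size - ow, 0), max(crop_size - oh, 0)
        img = ImageOps.expand(img, border=(0, 0, padw, padh), fill=0)
        mask = ImageOps.expand(mask, border=(0, 0, padw, padh), fill=0)
    # random crop to crop_size x crop_size
    w, h = img.size
    x1 = random.randint(0, w - crop_size)
    y1 = random.randint(0, h - crop_size)
    box = (x1, y1, x1 + crop_size, y1 + crop_size)
    return img.crop(box), mask.crop(box)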
qiulesun commented 6 years ago

"The test batch size is set to the number of GPUs" -- is that necessary?

zhanghang1989 commented 6 years ago

That is not necessary, but it is a lazy solution.

huanghoujing commented 5 years ago

@zhanghang1989 I think you are referring to this part of MultiEvalModule? Would it be better to add an assertion and a clear message to make sure the test batch size is no greater than the number of GPUs?
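Something along these lines could fail fast with a clear message (the function name is hypothetical):

import torch

def check_test_batch_size(batch_size):
    """Abort early instead of failing inside MultiEvalModule."""
    ngpus = torch.cuda.device_count()
    assert batch_size <= ngpus, (
        f'MultiEvalModule evaluates one image per GPU, so the test batch '
        f'size ({batch_size}) must not exceed the number of GPUs ({ngpus}).')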

huanghoujing commented 5 years ago

Well, I just found that this has already been handled, so there is no problem now.

huanghoujing commented 5 years ago

@zhanghang1989 It seems necessary to change the collate_fn of the dataloader when the images in a test batch have different sizes.

Otherwise, the default torch.stack(batch, 0, out=out) will raise an error.
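A minimal sketch of such a collate_fn, assuming the dataset yields (image, target) pairs, could simply return lists instead of stacked tensors:

import torch
from torch.utils.data import DataLoader

def list_collate(batch):
    # Return lists instead of calling torch.stack, which fails when
    # the images in a batch have different heights/widths.
    images = [sample[0] for sample in batch]
    targets = [sample[1] for sample in batch]
    return images, targets

# usage (test_dataset is a placeholder for any segmentation test set):
# loader = DataLoader(test_dataset, batch_size=torch.cuda.device_count(),
#                     collate_fn=list_collate)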

huanghoujing commented 5 years ago

Well, I just found that the author has already carefully handled this case. Just ignore my ignorance.