I used to use detectron2, in which FasterRCNN takes the differently-sized images in a batch and brings them to one fixed size so they can be stacked into a single tensor, and this step is implemented as a non-training torch module inside the model.
Now I'm using mmdet 3 and I wonder how it handles this. When we use multiscale training, each image in a batch will have a different size. What happens after an image is resized to a random size and before it is fed into FasterRCNN?
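To make the question concrete, here is a minimal sketch of what I mean by "forming a tensor" from images of different sizes. It is plain PyTorch, not detectron2 or mmdet code; `pad_to_batch` and its arguments are made-up names, and padding to the per-batch maximum is just one possible way to do it:

```python
import torch
import torch.nn.functional as F

def pad_to_batch(images, size_divisor=32, pad_value=0.0):
    """Pad a list of CHW image tensors of different sizes into one NCHW
    batch tensor whose H/W are the per-batch maxima, rounded up to a
    multiple of `size_divisor`."""
    max_h = max(img.shape[-2] for img in images)
    max_w = max(img.shape[-1] for img in images)
    # Round up so strided backbones (e.g. stride-32 feature levels) divide evenly.
    max_h = (max_h + size_divisor - 1) // size_divisor * size_divisor
    max_w = (max_w + size_divisor - 1) // size_divisor * size_divisor

    batch = []
    for img in images:
        pad_h = max_h - img.shape[-2]
        pad_w = max_w - img.shape[-1]
        # Pad on the bottom and right only, keeping the original content
        # at the top-left corner.
        batch.append(F.pad(img, (0, pad_w, 0, pad_h), value=pad_value))
    return torch.stack(batch, dim=0)

# Two images resized to different scales by multiscale augmentation.
imgs = [torch.rand(3, 600, 800), torch.rand(3, 750, 1100)]
batch = pad_to_batch(imgs)
print(batch.shape)  # torch.Size([2, 3, 768, 1120])
```

My question is where the equivalent of this step happens in mmdet 3 (in the data pipeline, or inside the detector), and whether it resizes or pads the images.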