Closed. qianzhang2018 closed this issue 5 years ago.
@qianzhang2018 Because the networks are fully convolutional, the input size can vary between different iterations. But for the images in the same mini-batch, they should have the same size.
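To illustrate the point about fully convolutional networks, here is a minimal sketch (not the actual FCOS backbone): a stack of convolutions has no fixed-size fully connected layer, so different iterations can feed it different spatial sizes.

```python
import torch
import torch.nn as nn

# Toy fully convolutional net: kernel_size=3 with padding=1 preserves
# the spatial size, and nothing in the net assumes a fixed H or W.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 4, kernel_size=3, padding=1),
)

# Two mini-batches with different input sizes both work;
# within each mini-batch, all images share one size.
out_a = net(torch.zeros(2, 3, 768, 1344))
out_b = net(torch.zeros(2, 3, 800, 1088))
print(out_a.shape)  # torch.Size([2, 4, 768, 1344])
print(out_b.shape)  # torch.Size([2, 4, 800, 1088])
```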
@qianzhang2018 The images in the same mini-batch have the same size. See the parameter 'SIZE_DIVISIBILITY' in the .yaml file; it is used by the function to_image_list() in maskrcnn_benchmark/structures/image_list.py to pad the mini-batch images to the same size.
@ZhengMengbin Why is SIZE_DIVISIBILITY = 32?
You can modify it; it is just a rounding parameter. The resulting image size = ceil(input_size / SIZE_DIVISIBILITY) * SIZE_DIVISIBILITY.
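As a quick sanity check of that formula, here is the rounding computed in plain Python (the function name `round_up` is just for illustration, not from the library):

```python
import math

def round_up(size, size_divisibility=32):
    # result = ceil(input_size / SIZE_DIVISIBILITY) * SIZE_DIVISIBILITY
    return int(math.ceil(size / size_divisibility) * size_divisibility)

print(round_up(750))   # -> 768
print(round_up(1333))  # -> 1344
print(round_up(768))   # -> 768 (already a multiple of 32, unchanged)
```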
@ZhengMengbin Thank you very much. I found that both class Resize(object) in FCOS/maskrcnn_benchmark/data/transforms/transforms.py and to_image_list(tensors, size_divisible=0) in maskrcnn_benchmark/structures/image_list.py can change the image size. Is there some relationship between them?
Yes, they can both change the image size. Resize(object) is used to resize the original input image. Because multi-scale training is used, each input image may be resized to a different size within the mini-batch. But the feature maps used to predict location and confidence must have the same size for every image in the mini-batch, so the to_image_list() function pads all the images to the same size (using zero padding).
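The padding step described above can be sketched as follows. This is a hedged approximation of what to_image_list() does, not the library's code; the function name `pad_to_common_size` is made up for illustration:

```python
import math
import torch

def pad_to_common_size(tensors, size_divisible=32):
    # Take the per-image maximum H and W, round each up to a multiple
    # of size_divisible, then zero-pad every image into that canvas.
    max_h = max(t.shape[1] for t in tensors)
    max_w = max(t.shape[2] for t in tensors)
    max_h = int(math.ceil(max_h / size_divisible) * size_divisible)
    max_w = int(math.ceil(max_w / size_divisible) * size_divisible)
    batch = tensors[0].new_zeros(len(tensors), tensors[0].shape[0], max_h, max_w)
    for img, padded in zip(tensors, batch):
        # Copy the image into the top-left corner; the rest stays zero.
        padded[:, : img.shape[1], : img.shape[2]].copy_(img)
    return batch

# Two multi-scale-resized images of different sizes in one mini-batch:
imgs = [torch.ones(3, 750, 1300), torch.ones(3, 768, 1333)]
print(pad_to_common_size(imgs).shape)  # -> torch.Size([2, 3, 768, 1344])
```

After this step every image in the mini-batch shares one spatial size, so the batch can be stacked into a single tensor for the network.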
It is very clear, thank you! Awesome, hahaha.
My dataset's pictures are slightly different in size from each other. I found that the network will adjust the size of each batch of pictures.
After the line targets = [target.to(device) for target in targets] in FCOS/maskrcnn_benchmark/engine/trainer.py #66, I added print(images.tensors.size()) and got torch.Size([2, 3, 768, 1344]) in one iteration and torch.Size([2, 3, 800, 1088]) in another (batch size is 2). Can the image sizes be different each time they are sent to the network for training? Is it because of the RoI head?