zishanahmed08 closed this issue 4 years ago.
You can train the network on any size you like, although I've noticed that non-square resolutions produce artifacts in that case. When converting the network for inference you must specify your arbitrary input size, which will then be used at inference time, since the resizing operation is embedded in the network graph, I believe.
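For concreteness, here is a minimal sketch (not the repo's actual export script) of what "resizing embedded in the network graph" can look like; the toy backbone, `TARGET_SIZE`, and export path are all made up for illustration:

```python
import tensorflow as tf

# Tiny stand-in for the detector backbone (hypothetical; any fully
# convolutional model behaves the same way here).
backbone = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu",
                           input_shape=(None, None, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
])

TARGET_SIZE = (768, 768)  # the arbitrary size you pick at conversion time

@tf.function(input_signature=[tf.TensorSpec([1, None, None, 3], tf.float32)])
def serve(image):
    # The resize is part of the exported graph, so whatever resolution the
    # caller feeds in gets mapped to TARGET_SIZE before the conv layers.
    resized = tf.image.resize(image, TARGET_SIZE)
    return backbone(resized)

tf.saved_model.save(backbone, "/tmp/resize_in_graph_demo",
                    signatures={"serving_default": serve})
```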
Hi @zishanahmed08, the resolution scaling rule is for better efficiency, but you can train the network with any resolution and run inference at any resolution. This is because convolutional network weights are independent of image size (thanks to the "convolutional" operations).
Again, the scaling rules are mostly for COCO, and you may need different training image sizes for other datasets.
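A quick way to see the "weights are independent of image size" point made above: the same fully convolutional model (a hypothetical toy stack, not EfficientDet itself) accepts different resolutions without any change to its parameters.

```python
import numpy as np
import tensorflow as tf

# Hypothetical fully convolutional stack; its weights carry no notion of
# the spatial input size, only of channel counts and kernel sizes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(None, None, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
])

# The same weights process two different resolutions; only the output
# feature-map size changes, never the parameter count.
for h, w in [(512, 512), (896, 640)]:
    out = model(np.zeros((1, h, w, 3), dtype=np.float32))
    print((h, w), "->", out.shape)  # e.g. (512, 512) -> (1, 256, 256, 16)
```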
The tutorial states that we can set the input image size to any desired value. However, the paper states the following: "d) Input image resolution: Since feature levels 3–7 are used in BiFPN, the input resolution must be divisible by 2⁷ = 128, so the resolution of the image is linearly increased using equation (3)."
So how is this managed? Is the arbitrary input size provided by us resized to the default size?
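For reference, equation (3) referenced above is, as far as I can tell, R_input = 512 + φ · 128. The sketch below prints the resolutions given by that formula and rounds an arbitrary size up to the nearest multiple of 128 (`round_up_to_stride` is a hypothetical helper, not something from the repo, and whether an implementation pads or resizes is up to it):

```python
import math

def round_up_to_stride(size, stride=128):
    """Round a height/width up to the next multiple of the coarsest
    feature stride (2**7 = 128 for BiFPN levels 3-7)."""
    return int(math.ceil(size / stride)) * stride

# Equation (3) from the paper: R_input = 512 + phi * 128
for phi in range(7):
    print(f"phi={phi}: R_input = {512 + phi * 128}")

# An arbitrary resolution such as 1280x720 is not divisible by 128,
# so it would need to be brought up to e.g. 1280x768 first.
print(round_up_to_stride(1280), round_up_to_stride(720))  # 1280 768
```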