ELTON-LGB closed this issue 3 years ago
Reducing the input size may give you a slight speed-up, but you may also lose accuracy. The error is likely caused by the resize values hardcoded into the decoder layers in the model definition.
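A minimal sketch of why hardcoded decoder sizes break when the input resolution changes (the function and stride values here are illustrative assumptions, not the repo's actual code): the encoder's feature-map sizes scale with the input, so a decoder stage that upsamples to a fixed 32x32 no longer matches the corresponding skip connection when the input shrinks from 512 to 256.

```python
import numpy as np

def encoder_feature_sizes(input_size, strides=(4, 8, 16, 32)):
    """Spatial size of each encoder stage, derived from the input size
    instead of being hardcoded (hypothetical helper for illustration)."""
    return [input_size // s for s in strides]

# With a 512x512 input, the 1/16 stage is 32x32.
print(encoder_feature_sizes(512))   # [128, 64, 32, 16]
# With a 256x256 input, the same stage shrinks to 16x16.
print(encoder_feature_sizes(256))   # [64, 32, 16, 8]

# A decoder that still resizes to a hardcoded 32x32 then fails to
# combine with the 16x16 skip feature, reproducing the broadcast error:
skip = np.zeros((16, 16, 128))
decoder_out = np.zeros((32, 32, 128))   # hardcoded resize target
try:
    _ = skip + decoder_out
except ValueError as e:
    print(e)  # operands could not be broadcast together ...
```

Deriving every resize target from the input size (as `encoder_feature_sizes` does) rather than hardcoding them is what makes the model definition work at other resolutions.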
Thanks so much!
Thanks for this awesome repository and for your great research!
I successfully trained the slim512 net and got the results I wanted. My question is:
How can I change the input tensor shape to [256, 256, 3]? I changed the Input to [256, 256, 3], but it failed with: ValueError: Operands could not be broadcast together with shapes (32, 32, 128) (16, 16, 128). So it is not enough to change only the Input shape and image_size.
"The inputs are initially downsampled from a size of 512 to 128 (i.e. 1/4th)". If I change the input tensor shape to [256, 256, 3], can I reduce inference time while keeping the same accuracy, or not?
I want to keep the current accuracy and reduce inference time. How can I improve the slim net, and can changing the input shape achieve this?
Thanks again for your great work!