Hello, first of all, thank you very much for your great work; I learned a lot from it. Now I have encountered a problem.
I changed the feat_shapes in the net code so that I can test images of different sizes. When I recorded the inference time of vgg_ssd300, I found a curious phenomenon:
the inference time for a 720×1280 image is shorter than for a 300×760 image.
I was very confused by this, so I ran some more tests and found:
when the aspect ratio is close to 1:2, the inference time becomes very long, even longer than for a larger image whose aspect ratio is 1:1.
My test results are:
t(720, 1280, 3) < t(306, 763, 3)
t(720, 1280, 3) < t(308, 665, 3)
t(720, 1280, 3) < t(307, 1144, 3)
t(720, 1280, 3) < t(302, 739, 3)
This only happens on the GPU. On the CPU, the inference time always grows with the image size, regardless of the aspect ratio.
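For reference, here is a minimal, self-contained sketch of the kind of timing loop I mean, with warm-up runs excluded from the measurement. The `toy_net` below is only a stand-in so the sketch runs on its own; it is not the repo's SSD net (in the real test the forward pass is built from the ssd_vgg_300 code with the modified feat_shapes):

```python
# Timing sketch (TensorFlow 1.x style, as used by SSD-Tensorflow).
import time
import numpy as np
import tensorflow as tf


def toy_net(x):
    # Stand-in conv stack so the sketch is runnable on its own; NOT the SSD net.
    for filters in (64, 128, 256):
        x = tf.layers.conv2d(x, filters, 3, strides=2, activation=tf.nn.relu)
    return x


def time_inference(shape, n_warmup=10, n_runs=50):
    """Average forward-pass time (seconds) for one input of shape (H, W, 3)."""
    tf.reset_default_graph()
    img_input = tf.placeholder(tf.float32, shape=(1,) + shape)
    output = toy_net(img_input)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        dummy = np.random.rand(1, *shape).astype(np.float32)

        # Warm-up: the first few runs include cuDNN autotuning and GPU
        # memory allocation and should not be counted.
        for _ in range(n_warmup):
            sess.run(output, feed_dict={img_input: dummy})

        # sess.run blocks until the GPU work is finished, so wall-clock
        # timing around it is valid without extra synchronization.
        start = time.time()
        for _ in range(n_runs):
            sess.run(output, feed_dict={img_input: dummy})
        return (time.time() - start) / n_runs


for shape in [(720, 1280, 3), (306, 763, 3), (308, 665, 3), (302, 739, 3)]:
    print(shape, round(time_inference(shape) * 1000, 2), "ms")
```

The same loop can be pointed at the real network by replacing `toy_net(img_input)` with the SSD forward pass.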
Has anyone else encountered this problem?