Closed OAMELLAL closed 4 months ago
@OAMELLAL
Is the model input shape [1,3,1024,1024]? If the inference image is 1024×1024, it will be automatically resized to 640×640. There is no need to change the input-size value; the default is 640, because the default YOLO input is [1,3,640,640].
@alanxinn
Yes, the input shape is [1,3,1024,1024]
If there is no problem with the model, then the issue is most likely in the image pre-processing or post-processing.
I found the solution: you have to change 8400 to 21504 in the Yolo constructor if you use the 1024 resolution.
The model output0 is [1,84,21504], right?
@alanxinn yes, the 21504 comes from the model output.
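For context on where these numbers come from: in YOLOv8-style models the prediction count is the total number of anchor cells across the three detection strides (8, 16, 32), so it scales with the square of the input size. A minimal sketch (the function name is my own, not from any library):

```python
# Total YOLOv8-style prediction count for a square input:
# sum of (input_size / stride)^2 over the three detection strides.
def yolo_anchor_count(input_size: int, strides=(8, 16, 32)) -> int:
    return sum((input_size // s) ** 2 for s in strides)

print(yolo_anchor_count(640))   # 8400  -> output shape [1, 84, 8400]
print(yolo_anchor_count(1024))  # 21504 -> output shape [1, 84, 21504]
```

This is why hard-coding 8400 in the constructor only works for 640×640 inputs.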
Hello,
I'm experiencing an issue with inference when using resolutions other than 640. When training is done at a resolution of 640, inference works fine, but with other resolutions it doesn't work as expected. I've tested inference with my models at various resolutions in Python and it works well; with this particular application, however, the inference runs but detects many bounding boxes scattered all over the image.
I tested several models at resolutions of 1024 and 2560.
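Scattered boxes everywhere usually means the raw output tensor is being parsed with the wrong anchor count. Assuming a YOLOv8-style output of shape [1, 84, N] (4 box coordinates followed by 80 class scores), a minimal NumPy decode sketch, where N must match the anchor count for the actual input size (21504 for 1024×1024, not 8400):

```python
import numpy as np

def decode_yolov8(output: np.ndarray, conf_thres: float = 0.25):
    """Decode a YOLOv8-style raw output of shape [1, 84, N] into
    (boxes, scores, class_ids), keeping predictions above conf_thres.
    The 84 rows are 4 box coordinates (cx, cy, w, h) then 80 class scores."""
    preds = output[0].T                  # [N, 84]
    boxes = preds[:, :4]                 # cx, cy, w, h in input-image pixels
    class_scores = preds[:, 4:]          # [N, 80]
    class_ids = class_scores.argmax(axis=1)
    scores = class_scores.max(axis=1)
    keep = scores > conf_thres
    return boxes[keep], scores[keep], class_ids[keep]

# For a 1024x1024 input the anchor dimension must be 21504, not 8400:
dummy = np.zeros((1, 84, 21504), dtype=np.float32)
boxes, scores, ids = decode_yolov8(dummy)
print(boxes.shape)  # (0, 4) -- an all-zero tensor yields no detections
```

If the parser instead assumes N=8400 on a [1, 84, 21504] tensor, box coordinates and class scores get read from the wrong offsets, which produces exactly the scattered-box symptom described above.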