Closed: jrukavina closed this 5 months ago
Try changing eval_spatial_size: [640, 640]
to eval_spatial_size: ~
in the config to support dynamic input sizes.
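Concretely, the config change looks like the following YAML fragment (the exact config file this key lives in varies per model; the path in the comment is illustrative, not confirmed):

```yaml
# e.g. in one of the rtdetr_pytorch configs (exact file may vary)
eval_spatial_size: ~   # was: [640, 640]; ~ (YAML null) enables dynamic input sizes
```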
Thanks, this worked! (Although I also had to change L58 to ~ as well.) Unfortunately, exporting to ONNX with dynamic width and height still does not work. Is there any way around this?
You can try modifying this logic to fit your needs (move the pos_embed initialization into forward):
https://github.com/lyuwenyu/RT-DETR/blob/main/rtdetr_pytorch/src/zoo/rtdetr/hybrid_encoder.py#L255
https://github.com/lyuwenyu/RT-DETR/blob/main/rtdetr_pytorch/src/zoo/rtdetr/hybrid_encoder.py#L297
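For anyone landing here later, this is a minimal sketch of that suggestion, not the repo's exact code: the 2D sincos position embedding is rebuilt in `forward` from the runtime feature-map size instead of being cached in `__init__` for a fixed 640x640 input. `DynamicEncoderLayer` and `build_2d_sincos_position_embedding` here are illustrative stand-ins for the linked hybrid_encoder.py logic.

```python
import torch
import torch.nn as nn

def build_2d_sincos_position_embedding(w, h, embed_dim=256, temperature=10000.0):
    # 2D sincos embedding in the same style as the repo's, parameterized by w/h.
    grid_w = torch.arange(w, dtype=torch.float32)
    grid_h = torch.arange(h, dtype=torch.float32)
    grid_w, grid_h = torch.meshgrid(grid_w, grid_h, indexing="ij")
    pos_dim = embed_dim // 4  # embed_dim must be divisible by 4
    omega = torch.arange(pos_dim, dtype=torch.float32) / pos_dim
    omega = 1.0 / (temperature ** omega)
    out_w = grid_w.flatten()[..., None] @ omega[None]
    out_h = grid_h.flatten()[..., None] @ omega[None]
    return torch.cat([out_w.sin(), out_w.cos(), out_h.sin(), out_h.cos()], dim=1)[None]

class DynamicEncoderLayer(nn.Module):
    """Hypothetical stand-in for the encoder stage that adds pos_embed."""
    def forward(self, feat):  # feat: [b, c, h, w]
        b, c, h, w = feat.shape
        src = feat.flatten(2).permute(0, 2, 1)  # -> [b, h*w, c]
        # Build the embedding here, from the actual h/w, instead of in __init__,
        # so inputs of any spatial size (and ONNX dynamic axes) stay consistent.
        pos_embed = build_2d_sincos_position_embedding(w, h, c).to(feat.device)
        return src + pos_embed
```

With this arrangement the same module accepts feature maps of different spatial sizes without any cached-shape mismatch.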
I will try that, thanks!
Hi, you have a great repository, thanks for open-sourcing it!
I am wondering if there is a way to run inference with your model on images of different dimensions. I have tried exporting to ONNX with dynamic width and height dimensions, but that didn't work. Also, when trying to run inference on images of different sizes in PyTorch, I get something like the following error:
```
File "/.../RT-DETR/rtdetr_pytorch/tools/../src/zoo/rtdetr/hybrid_encoder.py", line 141, in with_pos_embed
    return tensor if pos_embed is None else tensor + pos_embed
```
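For context on why that line fails: the position embedding is precomputed for the configured 640x640 eval size, so its token count no longer matches a differently sized input. A minimal sketch of the mismatch (the shapes are illustrative, assuming a stride-32 feature map and 256 channels):

```python
import torch

# pos_embed cached at init for a 640x640 input -> 20x20 = 400 tokens
pos_embed = torch.zeros(1, 400, 256)
# a 480x640 input instead yields 15x20 = 300 tokens
tensor = torch.randn(1, 300, 256)

try:
    _ = tensor + pos_embed  # mirrors with_pos_embed's `tensor + pos_embed`
except RuntimeError as err:
    print("shape mismatch:", err)
```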