Closed darouwan closed 1 week ago
👋 Hello @darouwan, thank you for bringing this up and for your interest in Ultralytics 🚀! We suggest checking out our Docs for details on usage, setup, and troubleshooting. Many common questions, including those for Python and CLI, are already addressed there.
From your description, it seems you may have encountered a bug 🐛. To help us investigate further, could you confirm the behavior by providing a detailed minimum reproducible example? It looks like you've already shared a script, thanks for that! If possible, please also share any logs, stack traces, or additional details that might help debug the issue.
In the meantime, ensure you're using the latest version of the ultralytics package, as this might resolve the issue. You can upgrade with the following command in your terminal:
pip install -U ultralytics
Additionally, make sure your TensorRT environment and setup align with our specified requirements.
For immediate solutions or collaborative help, consider joining our community:
💡 Rest assured, this is an automated response, and an Ultralytics engineer will review your issue in detail and follow up with additional support soon. Thank you for your patience! 🚀
Found it: the batch parameter in https://docs.ultralytics.com/modes/export/#arguments. Closing this issue.
@darouwan the batch size error occurs because your TensorRT engine was exported without enabling dynamic batch dimensions. To fix this, re-export your model with the dynamic=True argument to allow variable batch sizes, and specify your maximum batch size using batch=N:
model.export(format="engine", dynamic=True, batch=8) # max batch=8
For detailed guidance, see our TensorRT Export documentation: https://docs.ultralytics.com/integrations/tensorrt/#usage
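To illustrate the fix end to end, here is a minimal sketch of exporting with a dynamic batch axis and then running batched inference on the resulting engine. The checkpoint name yolo11n.pt, the image file names, and the batch size of 8 are illustrative assumptions; this requires a CUDA device with TensorRT installed.

```python
from ultralytics import YOLO

# Export the PyTorch model to a TensorRT engine with a dynamic batch axis.
# batch=8 sets the maximum batch size the engine will accept.
model = YOLO("yolo11n.pt")
model.export(format="engine", dynamic=True, batch=8)

# Load the exported engine and run batched inference (up to 8 images at once).
trt_model = YOLO("yolo11n.engine")
results = trt_model(["img1.jpg", "img2.jpg", "img3.jpg"])
for r in results:
    print(len(r.boxes))  # number of detections per image
```

Without dynamic=True, the exported engine has a fixed batch dimension of 1, which is why lists of more than one image fail at inference time.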
Search before asking
Ultralytics YOLO Component
Predict
Bug
I found that TensorRT with YOLO11 doesn't seem to support batched image inference. For example, when I initialize the model from the .pt file and pass an array of images in OpenCV format, like [img1, img2, img3], it works fine. But with the TensorRT model file, passing img1 or [img1] still works, while [img1, img2] outputs:
AssertionError: input size torch.Size([2, 3, 640, 640]) not equal to max model size (1, 3, 640, 640)
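For reference, the reported behavior can be sketched as follows, assuming an engine file exported with the default static batch=1 settings; the engine and image file names here are illustrative.

```python
import cv2
from ultralytics import YOLO

# Engine exported with default settings (static batch dimension of 1).
model = YOLO("yolo11n.engine")

img1 = cv2.imread("img1.jpg")
img2 = cv2.imread("img2.jpg")

model(img1)          # works: single image
model([img1])        # works: list of one image
model([img1, img2])  # fails: AssertionError, input size torch.Size([2, 3, 640, 640])
                     # not equal to max model size (1, 3, 640, 640)
```

The same list-of-images call succeeds when the model is loaded from the .pt checkpoint, which has no fixed batch dimension.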
Environment
Ultralytics 8.3.63 🚀 Python-3.11.10 torch-2.5.1+cu124 CUDA:0 (NVIDIA L40, 45373MiB)
Setup complete ✅ (12 CPUs, 251.5 GB RAM, 36.4/193.6 GB disk)
OS          Linux-5.15.0-1055-gkeop-x86_64-with-glibc2.35
Environment Linux
Python      3.11.10
Install     git
RAM         251.51 GB
Disk        36.4/193.6 GB
CPU         Intel Xeon Platinum 8462Y+
CPU count   12
GPU         NVIDIA L40, 45373MiB
GPU count   1
CUDA        12.4
numpy ✅ 1.23.5>=1.23.0
numpy ✅ 1.23.5<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 10.2.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1+cu124>=1.8.0
torch ✅ 2.5.1+cu124!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1+cu124>=0.9.0
tqdm ✅ 4.66.5>=4.64.0
psutil ✅ 6.1.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
Minimal Reproducible Example
Additional
No response
Are you willing to submit a PR?