Closed AlexiAlp closed 1 week ago
👋 Hello @AlexiAlp, thank you for your detailed report and for your interest in Ultralytics 🚀! We recommend starting with the Ultralytics Docs to explore all functionality, including Modes and Usage Examples.
It looks like you've encountered a 🐛 bug. To help us debug this issue effectively, could you please confirm if the provided code snippet is the Minimum Reproducible Example (MRE)? If not, kindly update it with an isolated example that reproduces the error. Learn more about MREs here.
Additionally, please make sure you are running the latest `ultralytics` version. Try upgrading with the following command:
pip install -U ultralytics
If you'd like input from the community while waiting for Ultralytics support:
YOLO models work seamlessly across the environments listed below. You may want to test your scenario in one of them to rule out environment-specific issues:
For further options related to model formats and troubleshooting, please visit our Prediction Documentation.
You can check the current CI status of the repository here:
This is an automated response to ensure swift engagement. An Ultralytics engineer will review and assist with your issue as soon as possible 🚀. Thank you for your patience!
You have an `ultralytics` folder which is outdated or broken in the same directory as the script. Delete the folder, or move your script to a folder where there's no `ultralytics` folder.
OK, I will try. It works when I remove the project code folder and reinstall the env! But I encounter an error when benchmarking ONNX with INT8:
benchmark(model="./yolo11n.pt", data="./coco.yaml", imgsz=640, int8=True, device="cuda", format="onnx")
When I set int8=False, the program works well, but when I set int8=True, I get the following error:
ERROR ❌ Benchmark failure for ONNX: ERROR ❌ argument 'int8' is not supported for format='onnx'
Setup complete ✅ (8 CPUs, 61.4 GB RAM, 468.6/499.8 GB disk)
Traceback (most recent call last):
File "/data/code/yolo11/onnx_benchmark.py", line 4, in
int8 is not supported for ONNX like it says
OK
The `int8` quantization is not supported for ONNX exports in Ultralytics. This parameter is only applicable to formats like TensorRT, OpenVINO, and CoreML, as shown in our export formats table. For ONNX benchmarking, please omit the `int8=True` argument. If you require INT8 quantization, consider using the TensorRT format, which supports it natively - see our TensorRT INT8 guide.
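For reference, a minimal sketch of guarding the `int8` flag before calling `benchmark()`. The `benchmark_args` helper and the `INT8_FORMATS` set are assumptions for illustration, not part of the Ultralytics API; verify the INT8-capable formats against the export formats table for your version:

```python
# Hypothetical helper: drop int8 for formats that don't support it, so
# benchmark() is never called with an unsupported argument.
# INT8_FORMATS is an assumption based on the export formats table
# (TensorRT 'engine', OpenVINO, CoreML); check your version's docs.
INT8_FORMATS = {"engine", "openvino", "coreml"}

def benchmark_args(fmt: str, int8: bool = False, **kwargs) -> dict:
    """Build a kwargs dict for benchmark(), omitting int8 when unsupported."""
    args = {"format": fmt, **kwargs}
    if int8 and fmt not in INT8_FORMATS:
        print(f"int8=True is not supported for format='{fmt}'; dropping it")
    else:
        args["int8"] = int8
    return args

onnx_args = benchmark_args("onnx", int8=True, imgsz=640)   # int8 dropped
trt_args = benchmark_args("engine", int8=True, imgsz=640)  # int8 kept
```

With a guard like this, `benchmark(model="yolo11n.pt", data="coco8.yaml", **trt_args)` would run the TensorRT INT8 path, while the ONNX call simply skips quantization instead of failing.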
Edited to include complete output of benchmark command.
I am facing the same problem with a Jetson Orin Nano 8GB and the official docker image as described in the Jetson docs.
I start the container of ultralytics 8.3.51 with
t=ultralytics/ultralytics:8.3.51-jetson-jetpack5
sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
I verify the version 8.3.51 running:
ultralytics version
# returns 8.3.51
For completeness, running `yolo checks` returns:
WARNING ⚠️ torchvision==0.14 is incompatible with torch==2.0.
Run 'pip install torchvision==0.15' to fix torchvision or 'pip install -U torch torchvision' to update both.
For a full compatibility table see https://github.com/pytorch/vision#installation
Creating new Ultralytics Settings v0.0.6 file ✅
View Ultralytics Settings with 'yolo settings' or at '/root/.config/Ultralytics/settings.json'
Update Settings with 'yolo settings key=value', i.e. 'yolo settings runs_dir=path/to/dir'. For help see https://docs.ultralytics.com/quickstart/#ultralytics-settings.
Ultralytics 8.3.51 🚀 Python-3.8.10 torch-2.0.0a0+ec3941ad.nv23.02 CUDA:0 (Orin, 7337MiB)
Setup complete ✅ (6 CPUs, 7.2 GB RAM, 134.5/233.7 GB disk)
OS Linux-5.10.104-tegra-aarch64-with-glibc2.29
Environment Docker
Python 3.8.10
Install git
RAM 7.16 GB
Disk 134.5/233.7 GB
CPU ARMv8 Processor rev 1 (v8l)
CPU count 6
GPU Orin, 7337MiB
GPU count 1
CUDA 11.4
numpy ✅ 1.23.5>=1.23.0
matplotlib ✅ 3.7.5>=3.3.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.10.1>=1.4.1
torch ✅ 2.0.0a0+ec3941ad.nv23.2>=1.8.0
torchvision ✅ 0.14.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.13>=2.0.0
numpy ✅ 1.23.5<2.0.0; sys_platform == "darwin"
torch ✅ 2.0.0a0+ec3941ad.nv23.2!=2.4.0,>=1.8.0; sys_platform == "win32"
If I then run benchmarks:
yolo benchmark model=yolo11n.pt data=coco8.yaml imgsz=640
(full output below)
@iokarkan The error you posted is a result of a different earlier error. You need to post the whole thing.
@Y-T-G I have included the entire output of the benchmark command in my original post.
Reinstall torch and torchvision
I am not sure why this would be required when using a official docker image. Even in the docs it says to skip to the TensorRT section after running with docker.
Could you provide some insight as to why this is needed?
@iokarkan I think you should open a new issue if the error is with the docker image in Jetson.
Yes, indeed. I think I'm coming to the conclusion that the 'key' error occurs when no benchmarks have successfully completed. This is a bug and requires its own issue.
Thanks for your help!
Thanks for identifying this! You're correct that the 'key' error occurs when no benchmarks complete successfully. This is a known issue we're investigating. For now, please verify your Torch/Torchvision versions match our Jetson compatibility guidelines. If issues persist, feel free to open a new GitHub issue with your full error logs.
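The failure mode above can be illustrated with a small sketch. This is an assumption about the cause, not the actual `benchmarks.py` code, and `summarize` is a hypothetical function: if `key` is only assigned inside the per-format loop when a run succeeds, then when every format fails the later read raises `UnboundLocalError`:

```python
# Minimal sketch of the suspected bug pattern: a local variable assigned
# only on the success path of a loop, then read unconditionally afterwards.
# `summarize` and its inputs are hypothetical, for illustration only.
def summarize(results):
    y = []
    for fmt, ok, metric in results:
        if not ok:
            continue  # failed benchmark: `key` never assigned on this path
        key = "metrics/mAP50-95(B)"  # assigned only when a run succeeds
        y.append((fmt, metric))
    return y, key  # raises UnboundLocalError if no run succeeded

summarize([("ONNX", True, 0.50)])  # fine: at least one success
try:
    summarize([("ONNX", False, None)])  # every run failed
except UnboundLocalError as e:
    print(type(e).__name__)  # UnboundLocalError
```

This matches the observation that the 'key' error only appears after an earlier failure has prevented any benchmark from completing.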
Search before asking
Ultralytics YOLO Component
No response
Bug
The script I wrote is the same as the one in https://docs.ultralytics.com/zh/modes/benchmark/#usage-examples, but I got the error:
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 0.7s, saved as 'yolo11n.onnx' (10.2 MB)
Export complete (0.9s)
Results saved to /data/code/yolo11/ultralytics
Predict: yolo predict task=detect model=yolo11n.onnx imgsz=640
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, format="onnx")
File "/data/code/yolo11/ultralytics/ultralytics/utils/benchmarks.py", line 183, in benchmark
df = pd.DataFrame(y, columns=["Format", "Status❔", "Size (MB)", key, "Inference time (ms/im)", "FPS"])
UnboundLocalError: local variable 'key' referenced before assignment
Validate: yolo val task=detect model=yolo11n.onnx imgsz=640 data=/usr/src/ultralytics/ultralytics/cfg/datasets/coco.yaml
Visualize: https://netron.app
ERROR ❌ Benchmark failure for ONNX: model='/data/code/yolo11/ultralytics/ultralytics/cfg/datasets/coco.yaml' is not a supported model format. Ultralytics supports: ('PyTorch', 'TorchScript', 'ONNX', 'OpenVINO', 'TensorRT', 'CoreML', 'TensorFlow SavedModel', 'TensorFlow GraphDef', 'TensorFlow Lite', 'TensorFlow Edge TPU', 'TensorFlow.js', 'PaddlePaddle', 'MNN', 'NCNN', 'IMX', 'RKNN') See https://docs.ultralytics.com/modes/predict for help.
Setup complete ✅ (8 CPUs, 61.4 GB RAM, 461.1/499.8 GB disk)
Traceback (most recent call last):
File "/data/code/yolo11/ultralytics/onnx_run.py", line 7, in
Environment
Ultralytics 8.3.70 🚀 Python-3.10.0 torch-2.6.0+cu124 CUDA:0 (NVIDIA L20, 45589MiB)
Setup complete ✅ (8 CPUs, 61.4 GB RAM, 461.2/499.8 GB disk)
OS Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.0
Install pip
RAM 61.43 GB
Disk 461.2/499.8 GB
CPU Intel Xeon Gold 6462C
CPU count 8
GPU NVIDIA L20, 45589MiB
GPU count 1
CUDA 12.4
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
Minimal Reproducible Example
from ultralytics.utils.benchmarks import benchmark
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, format="onnx")
Additional
No response
Are you willing to submit a PR?