Closed tjvvai closed 3 days ago
ChatGPT is a great tool. We use it every day. However, it makes mistakes. If someone on this website says they installed CUDA 11 on their Nano, ChatGPT follows that author without proper checking. Better to use Perplexity AI: the same AI power, but it shows the references it cited, so you can check the information later.
CUDA 11 cannot be installed on a Jetson Nano. The low-level driver infrastructure lacks the support needed for version 11. Despite NVIDIA's clear statement that it isn't possible, many people have tried. No one has succeeded.
Thank you so much for the confirmation. I definitely trust you more than ChatGPT. I guess ChatGPT is still not that trustworthy, is it?
So currently, we managed to convert yolov10.pt to .onnx, but cannot convert .onnx to .engine due to the mod operation not being supported on TensorRT 8.0.1.6, as shown in the image below.
May I ask whether you have run into similar issues before? Someone suggested writing a custom plugin to support that operation. What's your opinion?
Thanks in advance!
And that's always the nature of things: while one piece of technology progresses and the other stands still, a day comes when the two no longer match.
A newer ONNX opset is needed to describe the YoloV10 model (the Mod operator, for instance). The trtexec found on the Nano doesn't support the latest opset.
You can try modifying the ONNX model so it doesn't need the latest operations. That's hard and error-prone.
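For the Mod operator specifically, the usual graph-surgery trick is to replace it with basic arithmetic the old parser does accept. A sketch of the underlying identity, for the non-negative integer indices typical of YOLO grid arithmetic (the function name is ours):

```python
# a % b rewritten with subtract / floor-divide / multiply, i.e. the
# Sub(a, Mul(Div(a, b), b)) pattern a graph-surgery tool would emit
# in place of an ONNX Mod node. Valid here for non-negative a, b > 0;
# for negative operands, ONNX integer Div truncates, so check signs first.
def mod_via_basic_ops(a: int, b: int) -> int:
    return a - (a // b) * b

print(mod_via_basic_ops(17, 5))  # → 2
```

The same rewrite can be applied to the .onnx file with a tool such as onnx-graphsurgeon, but as said above it is fiddly and error-prone.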
Or you use an 'older' model, like YoloV8 or YoloV5. There isn't that much performance gain between YoloV10 and V8. The major difference is that YoloV10 integrates the NMS function into the model, while YoloV8 performs the same step in post-processing.
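That post-processing step is simple enough to run on the host, which is why the YoloV8 engine itself needs none of the newer operators. A minimal sketch of the NMS step (boxes as (x1, y1, x2, y2) corner coordinates; the function names and threshold are ours):

```python
# Minimal non-maximum suppression: keep the highest-scoring box and
# drop any lower-scoring box that overlaps it too much.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 9, 9), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```

In a real YoloV8 pipeline this runs per class on the raw network output after confidence filtering; libraries usually provide a vectorized version, but the logic is the same.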
Most of the time, on general topics, ChatGPT will do its work. When it comes to some niche details, expertise is still required. On this topic, you are definitely better than ChatGPT.
Thanks for the advice. For the Yolo model, I was wondering whether the yolov8 model still contains the mod operation, considering modulo division is quite basic. I will test the yolov8 model on the Jetson Nano and let you know the results.
One more thing we'd like your input on: we are trying to find edge devices that support CUDA 11. We are considering the Jetson Orin Nano 8GB. Do you have any better recommendations?
Thanks again!
You have several options:
- Jetson Orin Nano. Perfect, however expensive.
- Rock 5C. The NPU hits 6 TOPS. Cheap and very powerful; I use it all the time. Made in China.
- Raspberry Pi + AI Hat (Hailo). Hits 26 TOPS. Modest in price. However, you're bound to the Hailo software, which is not the most user-friendly.
Brilliant, thank you so much.
Currently, we are verifying our pipeline with a deployment on the farms. Making it work is our priority for now, so we might go for the Jetson Orin Nano for that purpose. We can think about price and other factors in the scaling stage.
Many thanks for all your valuable opinions and expertise. It is very nice to chat with you!
Hope you have a wonderful day.
Thank you so much for this image.
I was wondering whether we could get another image with CUDA 11 installed. I was trying to run the yolov10 model on the Jetson Nano and managed to convert the .pt model to .onnx format. However, when I convert to the .engine model, it says mod is not supported on TensorRT 8.0.1.6. But to get a newer version of TensorRT, we need a newer CUDA version. So here I am. I asked ChatGPT, and it says CUDA 11.4 might be supported on the Jetson Nano.
Is it possible to have an image with JetPack 5.0 and CUDA 11 built in? Thanks in advance.
From ChatGPT