Closed wangjl1993 closed 1 year ago
👋 Hello @wangjl1993, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
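A minimal usage sketch of that quickstart (the model name and sample image URL are examples only):
from ultralytics import YOLO
model = YOLO("yolov8n.pt")                                 # load a pretrained YOLOv8 nano model
results = model("https://ultralytics.com/images/bus.jpg")  # run inference on a sample image
print(results[0].boxes)                                    # detected boxes, classes and confidences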
@wangjl1993 Hi wangjl1993, I have run into the same problem as you. Have you found a good solution? I'd like to ask how you dealt with it.
Hello @simonj123,
The warning message "onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32." means that TensorRT's ONNX parser is casting your model's INT64 weights down to INT32, since TensorRT does not natively support INT64. A possible workaround is therefore to modify your ONNX model to use INT32 weights instead of INT64 before converting it to a TensorRT engine.
You can refer to this great tutorial on how to convert an ONNX model to a TensorRT engine with the help of the TensorRT ONNX parser. It also has a section on how to directly modify the ONNX model. I hope this helps!
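As a rough sketch of that direct-modification idea (assuming the onnx and numpy packages are installed; file names are placeholders, and as noted later in the thread this removes the warning but does not change the underlying TensorRT behaviour):
# Cast INT64 initializers in an ONNX model down to INT32.
# Values outside the INT32 range would overflow, and the cast model may no longer
# pass strict ONNX validation (e.g. Reshape officially expects INT64 shape inputs).
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("yolov5s.onnx")
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        arr = numpy_helper.to_array(init).astype(np.int32)
        init.CopyFrom(numpy_helper.from_array(arr, init.name))
onnx.save(model, "yolov5s_int32.onnx")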
Thanks! I had already tried that approach and the warning disappeared, but it did not solve the problem. It seems this can only be fixed once TensorRT itself supports INT64. Also, the "great tutorial" link returns a 404.
@wangjl1993 Thank you
Not yet. I can get rid of the warning but not the underlying problem. A few days ago, I converted the ONNX weights from INT64 to INT32 using onnx-typecast. It seems this can only be solved once TensorRT adds INT64 support.
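One way to narrow down where the results diverge, as a sketch (polygraphy is NVIDIA's TensorRT debugging tool, installable with pip install polygraphy; the model path and tolerances are placeholders):
# Compare ONNX-Runtime and TensorRT outputs for the same ONNX model
polygraphy run yolov5s.onnx --onnxrt --trt --atol 1e-3 --rtol 1e-3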
@wangjl1993 based on the information you have provided, it seems like this issue cannot be completely solved for now, especially if you require INT64 weights for your model. TensorRT does not natively support INT64, and the casting of INT64 weights to INT32 in TensorRT may cause changes to the results of your model.
As you have mentioned, your ONNX model with INT64 weights cannot be directly converted to TensorRT engine. Converting your ONNX model to have INT32 weights may remove the warning, but may not completely solve the underlying issue, especially if you need INT64 weights for your model. Relying on TensorRT updates to support INT64 may be a viable option in the future.
You might also consider using other deep learning frameworks that support INT64 weights, such as TensorFlow or PyTorch, or building a custom layer in TensorRT to support INT64 weights. However, these may require substantial changes to your current workflow.
Yes, that's correct. Thank you for your response.
@wangjl1993 you're welcome! Let us know if you have any further questions or issues. Good luck with your project!
Thank you for your help and kind words! I really appreciate your offer to help with any further questions or issues that may arise. I'll definitely reach out if I need any assistance. Thanks again, and have a great day!
You're welcome, @wangjl1993! It was my pleasure to assist you. Don't hesitate to reach out if you have any further questions or issues. Have a great day too!
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please visit the Ultralytics Docs at https://docs.ultralytics.com.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
@wangjl1993 did you find a solution to this? I'm seeing different results when comparing engine files generated with different TensorRT versions.
@HeeebsInc It looks like you have encountered some inconsistencies when comparing inference results between engine files generated with different versions of TensorRT. Finding a solution to this issue may require further investigation and analysis. It's possible that differences in the TensorRT versions or configurations could be causing the variations in results.
To resolve this, you may want to consider the following steps:
1. Confirm both engines were built from the same ONNX file, exported with the same opset and image size.
2. Make sure each engine is deserialized with the same TensorRT version it was built with; engines are not portable across TensorRT versions.
3. Check that the precision settings (FP32/FP16/INT8) and workspace size used at build time are identical.
4. Validate each engine against the same dataset to quantify the difference rather than comparing single images.
If the issue persists and you are unable to determine the cause, it may be helpful to provide more details about your specific setup, including the versions of TensorRT, any customizations or optimizations, and the specific steps you are following to generate the engine files.
Remember, the YOLOv5 community and the Ultralytics team are here to support you in resolving any issues you encounter. Feel free to ask further questions or provide additional information for a more targeted solution. We appreciate your contribution and thank you for bringing this to our attention.
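One way to quantify the difference, as a rough sketch (the weight and dataset names are placeholders, and each engine must be loaded with the same TensorRT version it was built with, i.e. in separate environments):
# Validate each engine and the PyTorch baseline against the same dataset and compare mAP
python val.py --weights yolov5s_trt84.engine --data coco128.yaml --imgsz 640   # engine built with TensorRT 8.4
python val.py --weights yolov5s_trt86.engine --data coco128.yaml --imgsz 640   # engine built with TensorRT 8.6
python val.py --weights yolov5s.pt --data coco128.yaml --imgsz 640             # PyTorch baseline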
Hi, I have converted my custom model to an engine file using:
model = YOLO("best.pt")
path = model.export(format="engine", opset=12, workspace=4, device=0, half=True)
The .pt model produces detections, but the engine detects nothing, i.e. the output video is identical to the input.
I am really stuck on this. Thanks in advance.
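As a quick sanity check, the same frame can be run through both sets of weights with the ultralytics API used above (a sketch; "frame.jpg" is a placeholder, and the .engine file must be used on the device it was exported for):
from ultralytics import YOLO
pt_model = YOLO("best.pt")
trt_model = YOLO("best.engine")
pt_results = pt_model("frame.jpg", imgsz=640)    # PyTorch weights
trt_results = trt_model("frame.jpg", imgsz=640)  # TensorRT engine
print("pt boxes:", len(pt_results[0].boxes))
print("engine boxes:", len(trt_results[0].boxes))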
Have you solved this? I ran into the same problem yesterday. I suspect it's due to the custom model, but I don't know how to deal with it.
@zyl1121 have you resolved the issue? I encountered the same problem recently. It is possible that the issue is related to the custom model, but I am unsure how to address it. Could you provide more details about your custom model and any specific steps you have taken to troubleshoot the problem? This information will help us better understand the issue and provide more targeted assistance. Thank you!
@glenn-jocher @zyl1121 Hi Glenn, zyl, I think there are two reasons in my case: 1. the images in my dataset are highly correlated, i.e. I generated the dataset from a 60 fps video; 2. I trained with a small number of samples (700 images). Because of this the *.pt file works fine, but after quantising to FP16 with the TensorRT framework I see no output from the engine. I suspect the combination of an overfitted model and quantization causes this problem.
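One way to test the FP16 theory, as a sketch (identical to the export call earlier in the thread except for half=False):
from ultralytics import YOLO
model = YOLO("best.pt")
model.export(format="engine", opset=12, workspace=4, device=0, half=False)  # FP32 engine for comparison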
Thanks for your reply! Have you tried training your dataset with an officially released model structure, like yolov5s.yaml? I tried that structure and exported it to a TensorRT engine, and both the exported .onnx and .engine files work normally. I think the problem is in the process of exporting the customized .onnx file to .engine. I read the export source code today but got no clue how to solve this, and I can't find much related information online.
@zyl1121 @glenn-jocher When I tried to use the trtexec tool for engine conversion of my model (yolov8n-seg), I noticed that the resulting engine will not work with this repo. So I tried the default conversion provided by the repo, where a separate ONNX conversion is not required. Also, if you fuse the model during the conversion process (pt -> onnx), the chance of getting detections from the engine seems lower.
Actually I posted an issue about this problem this morning: No object detected when using TensorRT engine with modified model structure. I provided some information there, including the customized model structure yaml file and the export log; I'm not sure what else I should provide. Maybe the .pt file might help? Please let me know.
I used export.py to convert the .pt file to .engine, which automatically converts .pt to .onnx first and then to the .engine file. Since the automatically generated .onnx file works correctly, I assume there is some problem in the TensorRT engine export step.
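For reference, a rough sketch of splitting that export into separate steps to isolate where detections are lost (weight paths and the image folder are placeholders):
python export.py --weights best.pt --include onnx --opset 12 --imgsz 640            # step 1: .pt -> .onnx
python detect.py --weights best.onnx --source data/images                           # verify ONNX detections
python export.py --weights best.pt --include engine --device 0 --half --imgsz 640   # step 2: .pt -> .engine
python detect.py --weights best.engine --source data/images --device 0 --half       # verify engine detections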
I suspect the issue is with your dataset... @glenn-jocher could you please share your suggestion? Thanks in advance.
I tried training an unmodified yolov5s model with my dataset and the problem disappeared: I'm able to run inference with detect.py on the .engine file and get result images with correct bounding boxes, so I don't think it's a dataset problem. Hope this can be solved.
@Manueljohnson063 hello,
Thank you for reaching out. It seems like the issue may not be related to the dataset itself. Based on your description, it appears that the problem lies within the process of converting the .pt file to the .engine file using the TensorRT framework.
In such cases, it is recommended to double-check the conversion process to ensure that all necessary steps and configurations are followed correctly. Additionally, you may want to consider using a more standardized model structure, such as yolov5s.yaml, to ensure compatibility during the conversion process.
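For example, a rough sketch of that path (custom.yaml is a placeholder for your dataset file; runs/train/exp/weights/best.pt is the default YOLOv5 output location):
python train.py --data custom.yaml --cfg yolov5s.yaml --weights yolov5s.pt --img 640 --epochs 100
python export.py --weights runs/train/exp/weights/best.pt --include engine --device 0 --half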
If the issue persists, it would be helpful to provide additional details, such as the specific steps and configurations you used during the conversion, any error messages or logs generated during the process, and any other relevant information that could aid in troubleshooting the problem.
Please let us know if there are any further developments or if you have any additional questions. We'll be happy to assist you.
Best regards, Glenn Jocher
Hello, have you solved the problem? How did you solve it?
Search before asking
Question
yolov5s.engine's inference results vs. yolov5s.pt's inference results: [screenshots omitted]
When I convert '.pt' to '.engine', there is one warning: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32. Then I checked in Netron: some ops like 'Reshape' use INT64 weights. However, TensorRT does not support INT64.
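For reference, the same check can be done programmatically, as a sketch (assumes the onnx package is installed; the file name is a placeholder):
# List every INT64 initializer in the model, equivalent to the Netron inspection above
import onnx
model = onnx.load("yolov5s.onnx")
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        print(init.name, list(init.dims))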
Is there any other way to fix this? Thanks!
Additional
No response