Closed: mohkan1 closed this issue 4 months ago.
👋 Hello @mohkan1, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@mohkan1 hey there! Thanks for providing detailed information about the issue you're encountering with YOLOv8 in the YOLOv8-CPP-Inference example.
From your description, it looks like switching models from YOLOv5 to YOLOv8 produces different results or possibly an error. A common cause of such problems is a difference in the model's input/output structure, or specifics of the ONNX model conversion.
Let's start troubleshooting with the following:
Model Input/Output Check: Ensure the model inputs and outputs are correctly configured for YOLOv8. Differences in input dimensions or preprocessing could cause issues.
ONNX Model Verification: Double-check that the YOLOv8 ONNX model was correctly converted and isn't corrupted. Re-export it if necessary (see the sketch after this list).
Code Adjustments: Make sure all model-specific parameters (e.g., input size, class names, anchors etc.) align with YOLOv8's specifications.
Dependencies: Confirm that all dependencies particularly OpenCV and ONNX are up to date, as outdated versions might lead to unexpected behaviors.
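For the ONNX verification point, here is a minimal sketch of re-exporting the model and sanity-checking the exported graph. It assumes a working ultralytics Python install plus the onnxruntime package, and yolov8n.pt is only a placeholder for your own weights:

```python
from ultralytics import YOLO
import onnxruntime as ort

# Re-export the model to ONNX (yolov8n.pt is a placeholder; use your own weights)
model = YOLO("yolov8n.pt")
onnx_path = model.export(format="onnx", opset=12)

# Load the exported graph and print its input/output shapes so they can be
# compared against what the C++ code expects
session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
for i in session.get_inputs():
    print("input :", i.name, i.shape)   # typically images [1, 3, 640, 640]
for o in session.get_outputs():
    print("output:", o.name, o.shape)   # YOLOv8: [1, 84, 8400] vs YOLOv5: [1, 25200, 85]
```

For the default 640x640, 80-class models, the YOLOv8 output drops the objectness score and transposes the detection axis relative to YOLOv5, which is exactly the kind of mismatch the C++ post-processing has to handle.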
If these steps don't resolve the issue, please provide any error messages or odd behaviors you notice when you switch to YOLOv8. This will help further narrow down the problem! 🛠️
@glenn-jocher Thanks for your suggestions, I have solved the issue. It was an incorrect model input/output configuration.
But now I have a new issue and am wondering if you have any idea why this is happening and how to solve it?
I have set the variable runOnGPU to true, but it always switches to the CPU.
Hey @mohkan1, glad to hear you resolved the model input/output issue! 🎉 Regarding your new problem with the runOnGPU variable, it sounds like the GPU isn't being recognized or utilized properly.
Here are a couple of quick checks and fixes you might consider:
Check torch.cuda.is_available() in your Python environment. This will confirm if PyTorch can access your GPU.
Set the device explicitly and move the model to it:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
If these steps don't help, providing the specific error messages or behavior when attempting to use the GPU could give more clues on what might be going wrong. Keep us posted!
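One more check worth adding, since the C++ example runs inference through OpenCV's DNN module (this is my assumption about why runOnGPU silently falls back, not something confirmed above): if the OpenCV build does not include CUDA support, requesting the CUDA backend falls back to the CPU without failing. A rough way to inspect an OpenCV build from Python is sketched below; it only reports on the OpenCV your Python environment is linked against, so the same condition ultimately has to hold for whichever OpenCV library the C++ example links to.

```python
import cv2

# Number of CUDA devices visible to this OpenCV build; the stock
# opencv-python wheel from PyPI is built without CUDA, so this is usually 0
print("OpenCV version:", cv2.__version__)
print("CUDA-enabled devices:", cv2.cuda.getCudaEnabledDeviceCount())

# Full build flags; look for "NVIDIA CUDA: YES" in the output
print(cv2.getBuildInformation())
```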
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
YOLOv8 Component
Other
Bug
The project structure of ultralytics/examples/YOLOv8-CPP-Inference
The code in the example ultralytics/examples/YOLOv8-CPP-Inference/main.cpp
When using YOLOv5, it yields the following results:
But when using YOLOv8, it yields the following:
Any idea why the YOLOv8 model is not working here?
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?