mgonzs13 / yolov8_ros

Ultralytics YOLOv8, YOLOv9, YOLOv10, YOLOv11 for ROS 2
GNU General Public License v3.0

Is there any problem if I run this code on a jetson xavier? #5

Open goodhsm2000 opened 1 year ago

goodhsm2000 commented 1 year ago

Hello. Thanks to the code you posted, I succeeded in detecting objects in real time with a webcam in my local environment. Using this code, I also tried real-time object detection with a RealSense camera on a Jetson Xavier. However, the camera image passes through the YOLO model without any objects being detected. (The "results" value comes out without problems, but "results[0].boxes" is empty.)

Is there any solution to this problem?

I'm using ROS 2 Foxy, so I installed the Foxy version of vision_msgs and changed the parameters to fit. (I succeeded in detecting objects in the same environment locally.)

Torch and torchvision are also installed correctly for the JetPack version I'm using.

I would appreciate your advice if you know anything.

mgonzs13 commented 1 year ago

Hi @goodhsm2000, it seems that YOLOv8 is detecting nothing in your images. You can try reducing the threshold; your objects may be getting bad scores. Have you checked whether the images obtained from the RealSense camera are correct? I mean, by visualizing them.
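For reference, both checks can be done outside ROS with a short standalone script. This is only a sketch: the weight file and "frame.jpg" are placeholders for whatever model and saved RealSense frame are actually used.

```python
# Standalone sanity check outside ROS: visualize the frame and run YOLOv8
# with a very low confidence threshold to see if anything scores at all.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8m.pt")        # placeholder weights
frame = cv2.imread("frame.jpg")   # placeholder frame grabbed from the RealSense

# Confirm the camera data itself looks correct.
cv2.imshow("raw frame", frame)
cv2.waitKey(0)

# A low conf value rules out detections being filtered out by the threshold.
results = model.predict(frame, conf=0.1, verbose=True)
print(results[0].boxes)  # an empty Boxes object here means nothing was detected at all
```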

goodhsm2000 commented 1 year ago

Yes. I used the cv2.imshow command to check that the camera image is coming in correctly. I set the threshold to 0.5, but it can't even detect a person right in front of the camera.

mgonzs13 commented 1 year ago

Try a threshold below 0.5. Which ultralytics version are you using?

goodhsm2000 commented 1 year ago

The same problem occurs with a conf value of 0.1. The ultralytics version is 8.0.57.

mgonzs13 commented 1 year ago

You can try a newer version of ultralytics. I have published a new branch (devel) with ultralytics 8.0.149. Btw, I also recommend using the Humble vision_msgs package. You can clone the repo into your ros2_ws.

goodhsm2000 commented 1 year ago

Hello. There was a problem that I couldn't detect any objects, just like before, even using the code from the new branch. While analyzing the code for troubleshooting, I found that the problem occurred in the "self.yolo.to(device)" part. Strangely, on my Jetson Xavier the YOLO model was not transferred to "cuda:0" by this call. I removed the "self.yolo.to(device)" part and solved the problem by passing device="cuda:0" as an argument to YOLO's "predict" function.
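For reference, a minimal sketch of that workaround. The weight file and image below are placeholders, not the node's actual inputs.

```python
import cv2
from ultralytics import YOLO

yolo = YOLO("yolov8m.pt")
# yolo.to("cuda:0")              # this call silently failed to move the model on the Xavier

image = cv2.imread("frame.jpg")  # stand-in for the image converted from the ROS message
results = yolo.predict(
    source=image,
    conf=0.1,
    device="cuda:0",             # select the GPU per prediction call instead
)
print(results[0].boxes)
```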

Do you know why this problem happened?

mgonzs13 commented 1 year ago

I have tried using the device in the predict function and it works well; I may change it in the future. Btw, have you checked https://github.com/ultralytics/ultralytics/issues/3557?

goodhsm2000 commented 1 year ago

Yes. What is certain is that when I ran the code currently in the main branch on my laptop, it worked without any problems.

I haven't used DeepStream yet, so I haven't read https://github.com/ultralytics/ultralytics/issues/3557 in detail.

Are you suggesting that I use DeepStream?

mgonzs13 commented 1 year ago

No, I was thinking about the JetPack version, since they suggest updating it. Nevertheless, in the devel branch I have changed the use of the device from model creation to the prediction function, as you mentioned before. If you want to try it, I have also created some custom messages, since I have received feedback from different sources about the differences in the vision_msgs package between Foxy and Humble. You can also try the instance segmentation and human pose estimation that are available in the devel branch.

goodhsm2000 commented 1 year ago

I see. But I was already using JetPack 5.1.

I checked the code of the new devel branch you uploaded. Thank you very, very much for your quick response and attentive assistance. It was a great help in resolving the issue.

And thank you for sharing the good code.

nypyp commented 3 months ago

Hi, I'm still facing the same issue with no detection on a Jetson Xavier, with JetPack 5.1.2 and a RealSense D435i. I can get the yolo_debug image but can't detect any object. Is there any check or solution that could fix this? I would really appreciate your help.

nypyp commented 3 months ago

It's a torch problem.

JetPack 5.1.2, CUDA 11.4, torch '2.0.0+nv23.05'

I found the cause: in yolov8_node.py, line 30, "from torch import cuda" causes self.yolo.predict to return nothing. Moving it to line 125, just before the CUDA cache is cleared, works fine. I don't know why it affects the model output; maybe there is some problem with torch on the Xavier. I hope this helps anyone who meets the no-detection issue.
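For anyone hitting the same thing, a simplified sketch of that change; the real node's structure is more involved, and the callback below is only illustrative.

```python
# Simplified sketch of the workaround: drop the module-level
# "from torch import cuda" and import it only where the cache is cleared.
from ultralytics import YOLO

yolo = YOLO("yolov8m.pt")

def image_cb(cv_image):
    # Inference still selects the GPU per call.
    results = yolo.predict(source=cv_image, conf=0.1, device="cuda:0")

    # Import torch's cuda module here, just before clearing the cache,
    # instead of at the top of the file.
    from torch import cuda
    cuda.empty_cache()

    return results
```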

mgonzs13 commented 3 months ago

@nypyp, I have changed the "from torch import cuda" to "import torch" to use cuda directly from torch. Can you try it?
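Roughly, the replacement would look like the following sketch; the helper name is illustrative, not the node's actual code.

```python
# Sketch of the change: use the torch module directly so cuda is
# accessed as torch.cuda instead of via "from torch import cuda".
import torch

def clear_gpu_cache():
    # Clear the CUDA cache only if a GPU is actually available.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```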

SongJaeGun commented 3 months ago

Even after changing it to "import torch" to use cuda directly from torch, nothing happens on the Jetson Xavier; self.yolo.predict does not return any information. Trying what @goodhsm2000 pointed out above, deleting "self.yolo.to(device)" and using device="cuda:0" as the argument to YOLO's "predict" function, does not solve the problem either.

I'm on a Jetson Xavier NX with JetPack 5.1.1. The NX currently only supports JetPack 5 (version 6 only supports the Orin series boards), so I can't install ROS 2 Humble. I need to implement it with ROS 2 Foxy. Is there anyone who can help me?

mgonzs13 commented 2 weeks ago

The latest versions of this repo don't support Foxy since they use lifecycle nodes. You may use an older version. Btw, what about using Docker?