ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

realtime detection on jetson using pi camera #13380

Open · Munia-AK opened 4 weeks ago

Munia-AK commented 4 weeks ago

Search before asking

Question

I'm trying to run real-time detection on a Jetson Nano using a custom fine-tuned YOLOv5s model and a Pi camera. I already did this with the following script, which worked:

[screenshot: working torch.hub-based script, 2024-10-24 11-57-39]

However, I don't want to use torch.hub to load the model because it requires an internet connection; I need this to work offline. In the same code I tried to load the model from its path without torch.hub.load, for example model = 'best.pt', and then passed each frame directly as input, like results = model(frame), but this didn't work and gave an error.
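(For reference: torch.hub can also load from a local clone of the repository with no network access. A minimal sketch, assuming the YOLOv5 repo is cloned at ./yolov5 and the weights are at best.pt:)

import torch

# source='local' loads hubconf.py from the local clone instead of GitHub,
# so no internet connection is needed
model = torch.hub.load('./yolov5', 'custom', path='best.pt', source='local')

# Inference on a BGR numpy frame then works exactly as with the online variant
results = model(frame)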

I know the solution lies in the detect.py script. I ran detect.py with a webcam on the Jetson Nano without an internet connection and it worked. So I made two attempts. In the first, I kept the same code as before but took the model-loading parts from detect.py and used them to replace torch.hub.load, like this:

[screenshot: modified loading code, 2024-10-24 13-25-05]

but this gave this error too that I wasn't able to fix in the end: _File "detect__.py", line 71, in resultimg = result.render()[0] # Render the detection and get the image AttributeError: 'Tensor' object has no attribute 'render

In the second attempt I edited the detect.py script itself, trying to add the command that runs the Pi camera through a GStreamer pipeline. Specifically, I edited the following three parts, believing I should change the webcam sections so the code runs the Pi camera instead of a webcam when --source 0 is chosen. After editing:

part 1:

def run(
    weights=ROOT / "yolov5s.pt",  # model path or triton URL
    source="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink",
    ...

part 2:

[screenshot: second edit, 2024-10-24 11-58-50]

part 3:

[screenshot: third edit, 2024-10-24 12-13-08]

but this didn't work and threw an error:

[screenshot: error output, 2024-10-24 13-01-49]
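(For context: the usual way to consume a pipeline string like the one in part 1 is OpenCV built with GStreamer support, rather than detect.py's stream loader, which does not pass a GStreamer backend flag to cv2.VideoCapture. A minimal capture sketch under that assumption:)

import cv2

pipeline = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, "
    "format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! "
    "video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
)

# cv2.CAP_GSTREAMER tells OpenCV to interpret the string as a GStreamer
# pipeline; without it the capture may silently fail to open
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()  # frame is a BGR numpy array on success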

I couldn't fix any of the errors from these attempts, so I'm not sure whether I'm following the right path. If this is doable, could you please guide me on how to make the Pi camera do the detection without using torch.hub.load?

Additional

No response

UltralyticsAssistant commented 4 weeks ago

πŸ‘‹ Hello @Munia-AK, thank you for reaching out with your query on YOLOv5 πŸš€!

It seems you're aiming to run real-time detection on a Jetson Nano using a Pi camera without relying on an internet connection for torch.hub. You're on the right track with modifying detect.py. This kind of setup can indeed be a bit tricky!
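(For reference, detect.py runs fully offline once the repository and weights are local; a typical invocation, assuming your custom weights are at best.pt:)

  python detect.py --weights best.pt --source 0  # 0 = default camera device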

Please make sure to provide a minimum reproducible example that can help us debug the situation. This will assist in pinpointing what might be going wrong with the changes you made.

Here are a few steps and resources that might help:

If you're considering alternatives, you might want to check out the newer YOLOv8 model, designed to be fast and efficient. You can install it using:

  pip install ultralytics
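(If you go that route, offline usage is analogous; a minimal sketch with the ultralytics package, assuming weights in the ultralytics format at best.pt, where frame stands for a BGR image from your camera:)

from ultralytics import YOLO

# Load local weights; no internet connection is needed
model = YOLO("best.pt")

# Run inference on a BGR numpy frame and draw the detections
results = model(frame)
annotated = results[0].plot()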

An Ultralytics engineer will assist you soon! Meanwhile, please explore our Tutorials for more guidance, including Tips for Best Training Results.

Feel free to share any additional details or screenshots that might aid in diagnosing the issue. Good luck, and we're here to support you! πŸš€

pderrenger commented 1 week ago

@Munia-AK to run YOLOv5 on a Jetson Nano without internet, you can load the model directly from a local file. First, ensure your model is saved as a .pt file. Then load it with the DetectMultiBackend class from the repository and run inference by calling the model on a preprocessed input tensor. Here's a basic example:

from models.common import DetectMultiBackend
from utils.torch_utils import select_device

# Load model from a local .pt file; no internet connection is required
device = select_device('0')  # '0' = first CUDA device, 'cpu' for CPU
model = DetectMultiBackend('best.pt', device=device)  # adjust path as necessary

# Perform inference; the input must be a preprocessed NCHW tensor
results = model(im)  # replace 'im' with your normalized input tensor

Ensure your environment is set up with all necessary dependencies from the requirements.txt file. If you encounter specific errors, please share them for further assistance.
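(Putting it together with the Pi camera, here is a minimal end-to-end sketch under the same assumptions; it reuses the GStreamer pipeline string from the question and the letterbox/NMS utilities from the repo. Note that scale_boxes is named scale_coords in older versions of the repository.)

import cv2
import torch
from models.common import DetectMultiBackend
from utils.augmentations import letterbox
from utils.general import non_max_suppression, scale_boxes
from utils.torch_utils import select_device

device = select_device("0")
model = DetectMultiBackend("best.pt", device=device)

pipeline = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, "
    "format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! "
    "video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Preprocess: letterbox to 640, BGR->RGB, HWC->CHW, normalize to [0, 1]
    im = letterbox(frame, 640, stride=model.stride, auto=True)[0]
    im = im.transpose((2, 0, 1))[::-1]
    im = torch.from_numpy(im.copy()).to(device).float() / 255.0
    im = im[None]  # add batch dimension

    # Inference + NMS
    pred = model(im)
    pred = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)

    # Rescale boxes back to the original frame size and draw them
    for det in pred:
        if len(det):
            det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], frame.shape).round()
            for *xyxy, conf, cls in det:
                cv2.rectangle(frame, (int(xyxy[0]), int(xyxy[1])),
                              (int(xyxy[2]), int(xyxy[3])), (0, 255, 0), 2)

    cv2.imshow("detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()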