Munia-AK opened this issue 4 weeks ago
Hello @Munia-AK, thank you for reaching out with your query on YOLOv5!
It seems you're aiming to run real-time detection on a Jetson Nano using a Pi camera without relying on an internet connection for torch.hub. You're on the right track with modifying detect.py. This kind of setup can indeed be a bit tricky!
Please make sure to provide a minimum reproducible example that can help us debug the situation. This will assist in pinpointing what might be going wrong with the changes you made.
Here are a few steps and resources that might help:
Ensure you have set up your environment correctly. YOLOv5 requires Python>=3.8.0 with PyTorch>=1.8 and the packages from requirements.txt installed. You can set up the environment with:
```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```
Since you're working with a Jetson Nano, ensure your GStreamer pipeline for the Pi camera is correctly configured. Double-check the syntax and compatibility of your GStreamer string.
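For reference, here is a minimal sketch of opening the Pi camera through GStreamer with OpenCV, based on the pipeline string you shared; the width, height, and framerate are assumptions and may need adjusting to a mode your sensor supports:

```python
import cv2

# Hypothetical Pi camera pipeline for Jetson Nano; adjust width/height/framerate
# to a capture mode your sensor actually supports.
gst_pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=30/1 ! "
    "nvvidconv flip-method=0 ! "
    "video/x-raw, width=1280, height=720, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Failed to open Pi camera via GStreamer")
ret, frame = cap.read()  # frame is a BGR numpy array on success
```

If `cap.isOpened()` returns False, the pipeline syntax is usually the culprit, so test the same string with `gst-launch-1.0` first.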
If you are using CUDA on your Jetson Nano, ensure it's properly installed and functional to leverage GPU acceleration.
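A quick way to confirm PyTorch can see the Nano's GPU:

```python
import torch

# Sanity check that CUDA is visible to PyTorch on the Nano
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # name of the Nano's GPU
```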
If you're considering alternatives, you might want to check out the newer YOLOv8 model, designed to be fast and efficient. You can install it using:
```bash
pip install ultralytics
```
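A minimal usage sketch with the ultralytics package (the weights path and image name here are just placeholders; substitute your own files):

```python
from ultralytics import YOLO

# Load a model from a local .pt file -- no torch.hub needed
model = YOLO("yolov8n.pt")

# Run inference; the source can be an image path, video, or stream
results = model("image.jpg")
```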
An Ultralytics engineer will assist you soon! Meanwhile, please explore our Tutorials for more guidance, including Tips for Best Training Results.
Feel free to share any additional details or screenshots that might aid in diagnosing the issue. Good luck, and we're here to support you!
@Munia-AK to run YOLOv5 on a Jetson Nano without internet, you can load the model directly from a local file. First, ensure your model is saved as a .pt file. Then load it with YOLOv5's DetectMultiBackend class and run inference by calling the model on your input. Here's a basic example:
```python
import torch
from models.common import DetectMultiBackend
from utils.torch_utils import select_device

# Load model locally (adjust the path as necessary)
device = select_device("0" if torch.cuda.is_available() else "cpu")
model = DetectMultiBackend("best.pt", device=device)

# Perform inference; 'frame' must be a preprocessed tensor, not a raw camera frame
results = model(frame)
```
Ensure your environment is set up with all the necessary dependencies from requirements.txt. If you encounter specific errors, please share them for further assistance.
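Note that, unlike the torch.hub AutoShape model, DetectMultiBackend returns raw prediction tensors, so you must preprocess each frame and apply non-max suppression yourself, roughly as detect.py does. A minimal sketch, assuming a recent YOLOv5 checkout on your path and a 640-pixel model:

```python
import numpy as np
import torch
from utils.augmentations import letterbox
from utils.general import non_max_suppression

# 'frame' is a BGR numpy array from the camera, 'model' and 'device' as above
img = letterbox(frame, 640, stride=32, auto=True)[0]  # resize + pad
img = img.transpose((2, 0, 1))[::-1]                  # HWC BGR -> CHW RGB
img = np.ascontiguousarray(img)
img = torch.from_numpy(img).to(device).float() / 255  # 0-255 -> 0.0-1.0
img = img[None]                                       # add batch dimension

pred = model(img)                                     # raw predictions
det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0]
# det is an (n, 6) tensor: x1, y1, x2, y2, confidence, class
```

This is also why calling `.render()` on the output fails: that method belongs to the Detections object returned by the torch.hub AutoShape wrapper, not to the raw tensors DetectMultiBackend produces.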
Search before asking
Question
I'm trying to run real-time detection on a Jetson Nano using a custom fine-tuned YOLOv5s model and a Pi camera. I already did this using the script below, which worked:
However, I don't want to use torch.hub to load the model because it requires an internet connection; I need this to work offline. In the same code I tried loading the model from its path without torch.hub.load, for example model = 'best.pt', and then sending each frame as input directly, like results = model(frame), but this didn't work and gave an error.
I know the solution lies in the detect.py script. I ran detect.py with a webcam on the Jetson Nano without an internet connection and it worked. So I made two attempts. In the first attempt I used the same code as before, but took the model-loading parts from detect.py and added them in place of torch.hub.load, like this:
but this gave the following error, which I wasn't able to fix in the end:

```
File "detect_.py", line 71, in <module>
    resultimg = result.render()[0]  # Render the detection and get the image
AttributeError: 'Tensor' object has no attribute 'render'
```
In the second attempt I edited the detect.py script, attempting to add the command that runs the Pi camera through a GStreamer pipeline. Specifically, I edited the following three parts, believing I should change the webcam sections so the code runs the Pi camera instead of the webcam when --source 0 is chosen. After editing:
part 1:

```python
def run(
    weights=ROOT / "yolov5s.pt",  # model path or triton URL
    source="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink",
    ...
```

part 2:
part 3:
but this didn't work and threw an error.
I couldn't fix any of the errors from these attempts, so I'm not sure whether I'm on the right path. If this is doable, can you please guide me on how to make the Pi camera do the detection without using torch.hub.load?
Additional
No response