Closed: medic-lab closed this issue 2 years ago.
👋 Hello @medic-lab, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.
Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@medic-lab connected webcams are generally accessed via their indices, i.e.
python detect.py --source 0
python detect.py --source 1
python detect.py --source 2
...
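For context, detect.py treats a purely numeric --source value as a local device index and anything else as a file path, URL, or stream. The helper below is an illustrative sketch of that dispatch (parse_source is a hypothetical name, not a YOLOv5 function):

```python
def parse_source(source):
    """Return an int device index for numeric sources (webcams),
    otherwise return the string unchanged (file, URL, or stream)."""
    source = str(source).strip()
    if source.isnumeric():
        return int(source)  # e.g. "0" -> first connected webcam
    return source  # e.g. "rtsp://host/stream" or "video.mp4"
```

This is why a CSI camera cannot be reached through --source 0: it is not enumerated as a numeric video device in the same way a USB webcam is.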
Well, my camera is not considered a webcam. It's not connected via USB, so --source 0
and above don't work for me. CSI cameras use a different port, but I don't know how to define it in YOLOv5.
I can only run the camera with the command below in a terminal:
gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
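The gst-launch command above previews the camera on screen via nveglglessink. To feed the same camera into OpenCV, the pipeline has to end in appsink instead. The helper below is a sketch (csi_pipeline is a hypothetical name) that builds such a pipeline string from the parameters used in the command above:

```python
def csi_pipeline(sensor_id=0, capture_width=3820, capture_height=2464,
                 framerate=21, flip_method=0,
                 display_width=960, display_height=616):
    """Build a GStreamer pipeline string for an NVMM CSI camera that ends
    in appsink, so cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER) can read it."""
    return (
        f"nvarguscamerasrc sensor_id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, height={capture_height}, "
        f"framerate={framerate}/1, format=NV12 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )
```

This only constructs the string; actually opening the capture requires OpenCV built with GStreamer support, as discussed later in the thread.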
I'm sorry, I don't have experience with those cameras.
Any information on how to use ip cameras as source?
@medic-lab To use a CSI camera on Jetson, follow this. I tested it on a Jetson Nano 4GB and it works smoothly.
@ToshiNandanReddy It accepts a URL as source with any of the four protocols mentioned here.
@sriramreddyM
I have an IP address not a url...
And what is the streaming protocol? Just use the IP address as a URL; it should work if the streaming protocol is correctly specified.
it has both http and rtsp...
Did you try using the IP as a URI, and if so, isn't it working? And are you using GStreamer on the Jetson for streaming over IP?
I can use the CSI camera, but I can't use it in YOLOv5 as a source.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@medic-lab Hey, I am facing a similar difficulty. Were you able to integrate the CSI camera into YOLOv5? If yes, please help me understand how we can do it. @sriramreddyM I followed the jetson-hacks CSI camera module that you linked in the previous comments, but I am unable to figure out how to insert the code into the YOLOv5 detect file. If you have done it, can you kindly explain how?
Thank you!
@Rusali28 If you want to use the detect.py file, you have to write a custom method in dataloaders to read frames from the CSI camera. Instead, I wrote my own detection method, following the same approach as jetson-hacks-csi-camera to read frames from the CSI camera using OpenCV. Please find the sample code below for your reference.
@sriramreddyM Thank you for sharing the sample code! I went through it and tried to understand it. However, since I am using a custom dataset, I need to use the detect.py file to get accurate YOLOv5 predictions. I figured that we have to change the dataloaders file to read frames from the CSI camera, but I am unable to figure out how. Can you please help me understand how we can write a custom method in dataloaders to read the CSI camera? Are there any sample references for that? Or can we modify the current LoadStreams class in the dataloaders file so that it can read frames from the CSI camera? Please do let me know. Thank you for your help!
YOLOv5 detections will remain the same if you write your own inference script, as long as you use the same weights and follow the same pre-processing steps as in the dataloaders. This approach is easier and allows more customization of the detection results.
If you insist on using the detect.py file, you will need to modify the LoadStreams class to add support for CSI cameras. In LoadStreams, the usual webcam case passes the sensor ID from the arguments to cv2.VideoCapture(s), but for CSI cameras you should pass a GStreamer pipeline string as shown here. You may also need to add an argument (--csi_cam) to detect.py, passed through to LoadStreams, to distinguish between a USB camera and a CSI camera.
Please note that OpenCV must be built with GStreamer support to use the CSI camera. I have tested the CSI camera and YOLOv5 together only on the NVIDIA Jetson Nano.
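As a rough sketch of the branch LoadStreams would need (the capture_args helper and the exact pipeline parameters are illustrative assumptions, not YOLOv5 code):

```python
# Hypothetical dispatch for LoadStreams: choose between a numeric webcam
# index and a GStreamer pipeline string when a --csi_cam style flag is set.
CSI_PIPELINE = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, "
    "format=NV12, framerate=30/1 ! nvvidconv flip-method=2 ! "
    "video/x-raw, width=1280, height=720, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

def capture_args(source, csi_cam=False):
    """Return the (source, api_preference) pair to pass to cv2.VideoCapture.
    cv2.CAP_GSTREAMER is 1800 in OpenCV; it is kept as a plain int here so
    this sketch runs without OpenCV installed."""
    CAP_GSTREAMER, CAP_ANY = 1800, 0
    if csi_cam:
        return CSI_PIPELINE, CAP_GSTREAMER
    return (int(source) if str(source).isnumeric() else source), CAP_ANY
```

In a real modification, the returned pair would replace the plain cv2.VideoCapture(s) call inside LoadStreams.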
@sriramreddyM I understand your suggestion to write our own inference script; however, since I am relatively new to computer vision, I am a little unsure how to write an inference script for YOLOv5 on my own. Can you help me understand if there are ways or resources to do it?
Also, since I am nearing a deadline, I was trying to use detect.py, as I have already modified it a bit to print results. But I do want to try both ways (the current detect.py and an inference script). As of now, I am trying to modify the LoadStreams function. I made the changes you suggested, but the detect file doesn't seem to run. Can you please have a look at the changes I made? Am I missing something here?
Yes, I am also trying to test the CSI camera and YOLOv5 together, on my NVIDIA Jetson Nano (4 GB). But I have been stuck on this problem for a few days now; can you please help me understand if I should try any other approach? [I also tried running NVIDIA DeepStream for the code, but that failed due to system constraints]
Thank you! files.zip
@sriramreddyM Can you please have a look and help me figure this out?
@Rusali28 Can you provide the error log when running detect.py? Did you check whether OpenCV is built with GStreamer support? Run print(cv2.getBuildInformation())
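The GStreamer check can be automated by scanning the build-info text for the relevant line. The parser below is a small sketch (gstreamer_enabled is a hypothetical helper) that works on the raw string, so it can be tried even without a camera attached:

```python
def gstreamer_enabled(build_info):
    """Return True if OpenCV's build information reports GStreamer support.
    The build info contains a line like 'GStreamer: YES (1.14.5)' or 'GStreamer: NO'."""
    for line in build_info.splitlines():
        if "GStreamer" in line:
            return "YES" in line
    return False

# Usage with a real OpenCV install:
#   import cv2
#   print(gstreamer_enabled(cv2.getBuildInformation()))
```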
Here are the steps you can follow to use the CSI camera with OpenCV for yolov5 detection:
First, check if the camera is detected and working by running nvgstcapture-1.0. This should capture from the camera and preview the output on the display.
Next, make sure OpenCV is built with GStreamer support. If it isn't, you'll need to uninstall OpenCV and rebuild it with GStreamer support; you can follow these instructions to do this.
Once OpenCV is set up correctly, you can use the following example code to test the CSI camera and perform detection with a loaded model:
import cv2
import torch

# Load the pretrained model from the hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# To load a model from a local path, use the following line instead:
# model = torch.hub.load('path_to_/yolov5', 'custom', 'path_to_weights/yolov5s.pt', source='local', force_reload=True)

# Open the CSI camera through a GStreamer pipeline ending in appsink
cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER)

while True:
    ret, frame = cap.read()
    if ret:
        results = model(frame, size=640)
        results.print()
    else:
        print("No frame")
        break

cap.release()
Edit: in case of the error Illegal instruction (core dumped), add export OPENBLAS_CORETYPE=ARMV8 to your bashrc file, or start Python with OPENBLAS_CORETYPE=ARMV8 python3. If you are facing any other errors, follow these to prepare your Jetson and install dependencies.
Hello @sriramreddyM, as suggested I ran print(cv2.getBuildInformation()) on Python 3.9.16 (GCC 7.5.0, Linux), and this is what it prints:
General configuration for OpenCV 4.7.0 =====================================
  Version control:     4.7.0-dirty
  Platform:
    Timestamp:         2022-12-29T19:13:29Z
    Host:              Linux 5.3.0-28-generic aarch64
    CMake:             3.25.0
    CMake generator:   Unix Makefiles
    CMake build tool:  /bin/gmake
    Configuration:     Release
CPU/HW features: Baseline: NEON FP16
C/C++:
Built as dynamic libs?: NO
C++ standard: 11
C++ Compiler: /opt/rh/devtoolset-10/root/usr/bin/c++ (ver 10.2.1)
C++ flags (Release): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: /opt/rh/devtoolset-10/root/usr/bin/cc
C flags (Release): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release): -L/ffmpeg_build/lib -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
Linker flags (Debug): -L/ffmpeg_build/lib -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
ccache: YES
Precompiled headers: NO
Extra dependencies: /lib64/libopenblas.so Qt5::Core Qt5::Gui Qt5::Widgets Qt5::Test Qt5::Concurrent /usr/local/lib/libpng.so /usr/local/lib/libz.so dl m pthread rt
3rdparty dependencies: libprotobuf ade ittnotify libjpeg-turbo libwebp libtiff libopenjp2 IlmImf quirc tegra_hal
OpenCV modules: To be built: calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo python3 stitching video videoio Disabled: world Disabled by dependency: - Unavailable: java python2 ts Applications: - Documentation: NO Non-free algorithms: NO
GUI: QT5 QT: YES (ver 5.15.0 ) QT OpenGL support: NO GTK+: NO VTK support: NO
Media I/O: ZLib: /usr/local/lib/libz.so (ver 1.2.13) JPEG: libjpeg-turbo (ver 2.1.3-62) WEBP: build (ver encoder: 0x020f) PNG: /usr/local/lib/libpng.so (ver 1.6.37) TIFF: build (ver 42 - 4.2.0) JPEG 2000: build (ver 2.4.0) OpenEXR: build (ver 2.3.0) HDR: YES SUNRASTER: YES PXM: YES PFM: YES
Video I/O:
  DC1394:       NO
  FFMPEG:       YES
    avcodec:    YES (59.37.100)
    avformat:   YES (59.27.100)
    avutil:     YES (57.28.100)
    swscale:    YES (6.7.100)
    avresample: NO
  GStreamer:    NO
  v4l/v4l2:     YES (linux/videodev2.h)
Parallel framework: pthreads
Trace: YES (with Intel ITT)
Other third-party libraries: Lapack: YES (/lib64/libopenblas.so) Eigen: NO Custom HAL: YES (carotene (ver 0.0.1)) Protobuf: build (3.19.1)
OpenCL: YES (no extra features) Include path: /io/opencv/3rdparty/include/opencl/1.2 Link libraries: Dynamic load
Python 3: Interpreter: /opt/python/cp37-cp37m/bin/python3.7 (ver 3.7.16) Libraries: libpython3.7m.a (ver 3.7.16) numpy: /home/ci/.local/lib/python3.7/site-packages/numpy/core/include (ver 1.19.3) install path: python/cv2/python-3
Python (for build): /bin/python2.7
Java:
ant: NO
JNI: NO
Java wrappers: NO
Java tests: NO
Install to: /io/_skbuild/linux-aarch64-3.7/cmake-install
So I think GStreamer is not built into this Python 3.9 OpenCV, and because of this I am unable to open the camera on Python 3.9. However, when I run the simple_camera.py file from the jetson-hacks repo on Python 3.6, it works fine. But my issue is that my YOLOv5 copydetect.py code has a few modifications in the --save-txt section that I want to print when the predictions are made, so it is necessary for me to run copydetect.py, which only runs on Python 3.9 due to various dependency issues. I will try to install OpenCV with GStreamer support as you suggested in step 2, but I am afraid it might again lead to a version dependency problem, and then my copydetect.py file may fail to run.
Also, can you suggest whether we can make customised changes to the results printed by torch.hub.load?
Currently, this is the error log I face when I run copydetect.py on python3.9
nvidia@nvidia-desktop:~/camrusali/yolov5$ python3.9 copydetect.py --weights best.pt --source csi-cam
/home/nvidia/.local/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension:
  warn(f"Failed to load image Python extension: {e}")
copydetect: csi_cam=False, weights=['best.pt'], source=csi-cam, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
Traceback (most recent call last):
  File "/home/nvidia/camrusali/yolov5/copydetect.py", line 285, in <module>
    main(opt)
  File "/home/nvidia/camrusali/yolov5/copydetect.py", line 280, in main
    run(**vars(opt))
  File "/home/nvidia/.local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
TypeError: run() got an unexpected keyword argument 'csi_cam'
Thank you very much!
@Rusali28, if you're able to run the code I suggested without any errors using Python 3.6, then I recommend using it or modifying it to fit your requirements. You can use the following (example) commands to work with the detection results:
results.print()
results.show()
results.save()
results.xyxy[0]
If you still prefer to use detect.py, you'll need to work on fixing the dependencies. However, the error you copied doesn't seem to be related to YOLOv5 or the CSI camera. It's likely caused by an issue with the way you're passing arguments to the script.
@sriramreddyM I tried the steps you suggested, and after a bunch of issues and errors, I am finally able to run my model using the camera successfully! Thank you so much for your support and help! I am very grateful!
I just registered an account to say thanks. For those who want to use CSI as input, you can replace cap = cv2.VideoCapture(s) with cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER) in dataloaders.py, and then it works.
@General-kirby Thank you for sharing your experience and solution with us! It's great to hear that the suggested steps worked for you. Your suggestion to replace the cap line in dataloaders.py will be helpful for others who want to use CSI as input. We appreciate your contribution to the YOLOv5 community. If you have any further questions or feedback, feel free to share them with us.
@sriramreddyM @Rusali28 @General-kirby
First of all, I am not fluent in English, so please bear with my grammar.
Anyway, I also tried to use a CSI camera with YOLOv5. It works fine when I use the CSI camera alone, without YOLOv5. I want to use an IMX219-200 camera (200° FOV, applicable for the Jetson Nano).
GStreamer was always "NO" whether I installed GStreamer or not on Python 3.9. I tried to install OpenCV again and again, and it was always "NO". I also tried to change the code from cap = cv2.VideoCapture(s) to the cv2.VideoCapture("nvarguscamerasrc ! ...") variant.
Even though I read your comments and tried to follow them, I failed :(
So, could you give me some advice?
My environment is:
- Jetson Nano
- Ubuntu 18.04
- OpenCV 4.5.3
- PyTorch 2.0.1
- pip 21.3.1
- CUDA 10.2.300
- cuDNN 8.2.1.32
@zijeon
You have to compile OpenCV from source to enable GStreamer. Follow these steps to install OpenCV with GStreamer support.
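A condensed sketch of the usual build recipe (WITH_GSTREAMER is a standard OpenCV CMake option; the package names and other flags are assumptions to adapt to your JetPack install, not a verified Jetson build script):

```shell
# Sketch only: build OpenCV from source with GStreamer enabled.
sudo apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D WITH_GSTREAMER=ON \
      -D BUILD_opencv_python3=ON \
      ..
make -j"$(nproc)" && sudo make install
```

After installing, re-run print(cv2.getBuildInformation()) and confirm the GStreamer line now reads YES.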
@sriramreddyM
To enable GStreamer support in OpenCV, you will need to compile OpenCV from source. You can follow the steps mentioned in this documentation to install OpenCV with GStreamer support on your Jetson Nano: [Link: steps-to-install-opencv-with-gstreamer-support].
Once you have successfully compiled and installed OpenCV with GStreamer support, you should be able to use the CSI camera with YOLOv5.
Let me know if you have any further questions or issues.
Search before asking
Question
Hello, I can run the CSI camera on the Jetson Xavier NX on its own, but when I try to run the camera with YOLOv5, I'm having issues because the camera is not defined in YOLOv5.
Additional
No response