ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0
51.2k stars · 16.43k forks

Question: is there a way to apply yolov5 to multiple streaming sources? #1100

Closed hdnh2006 closed 4 years ago

hdnh2006 commented 4 years ago

Hi! I would like to apply yolov5 to multiple cameras. The approach I thought of is to paste the images together and run the algorithm on the composite, but is there a better way to do it? Is Ultralytics considering this feature?

Any suggestion will be welcome. Thanks in advance,

H.

github-actions[bot] commented 4 years ago

Hello @hdnh2006, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients.

For more information please visit https://www.ultralytics.com.

glenn-jocher commented 4 years ago

@hdnh2006 multi-stream capability is already built in, we've created a multithreaded streamloader that feeds detect.py: https://github.com/ultralytics/yolov5/blob/77940c3f42d0f0542d346bfe5fa913f8b0033b5c/utils/datasets.py#L255

To use multiple streams you simply create a text file with the addresses (http, rtsp, etc.), one per line, and pass it as a source. For example, for 16 simultaneous streams:

python detect.py --batch 16 --source streams.txt
hdnh2006 commented 4 years ago

It works! You are awesome, guys! Thanks for this fantastic tool. It has helped me a lot!!

Thanks @glenn-jocher !

Just for anyone who needs it: I set up my streams.txt file as follows:

http://192.168.0...
rtsp://admin:...
hdnh2006 commented 4 years ago

Maybe this is another question @glenn-jocher, but I cannot see the batch parameter in the detect.py code. Am I wrong?

glenn-jocher commented 4 years ago

@hdnh2006 great, glad it works well!

--batch is an abbreviation of --batch-size. The argparse parser allows passing unambiguous abbreviations of full argument names.
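This prefix matching can be seen with a few lines of standard-library argparse (a minimal sketch, not YOLOv5's actual parser):

```python
import argparse

# argparse matches unambiguous prefixes of long options by default,
# so "--batch" resolves to "--batch-size".
parser = argparse.ArgumentParser()
parser.add_argument('--batch-size', type=int, default=1)

args = parser.parse_args(['--batch', '16'])
print(args.batch_size)  # 16
```

Note the abbreviation only works while the prefix is unambiguous; adding a second option starting with `--batch` would make `--batch` an error.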

hdnh2006 commented 4 years ago

Yes @glenn-jocher, but it seems this argument is not in the detect.py code, only in train.py; that's why I don't understand.

glenn-jocher commented 4 years ago

@hdnh2006 ah of course. Yes you are right.

The streamloader automatically composes a batch of the right size, so you don't need to take any action there. If you have 2 streams it will build you a batch-size 2 input automatically. If you have 16 streams it will build a batch size 16 input, etc.

One recommendation here: you want a dedicated CPU thread per stream, to allow cv2 to decode the multithreaded streams smoothly. We found this in our own testing.
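The batching behavior can be illustrated with a standard-library-only sketch (the sources and `reader` loop below are stand-ins for real cv2 capture, not the actual YOLOv5 loader): one daemon thread per stream keeps the latest frame in a shared slot, and the main loop snapshots all slots into a batch whose size equals the stream count.

```python
import threading
import time

def start_stream_readers(sources):
    # One daemon thread per source keeps the latest "frame" in a shared slot,
    # mimicking the dedicated-thread-per-stream pattern described above.
    frames = [None] * len(sources)

    def reader(i, src):
        n = 0
        while True:
            n += 1
            frames[i] = f'{src}-frame{n}'  # stand-in for cap.read()
            time.sleep(0.01)

    for i, src in enumerate(sources):
        threading.Thread(target=reader, args=(i, src), daemon=True).start()
    return frames

sources = ['rtsp://cam1', 'rtsp://cam2', 'rtsp://cam3']
frames = start_stream_readers(sources)
time.sleep(0.2)       # let each reader produce at least one frame
batch = list(frames)  # snapshot: batch size == number of streams
print(len(batch))  # 3
```

With 2 sources the snapshot would be a batch of 2, with 16 sources a batch of 16, which matches the behavior described above.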

hdnh2006 commented 4 years ago

Thank you so much again for this fantastic tool you have created @glenn-jocher.

I will close the issue.

imabhijit commented 4 years ago

Hi, is there a way we can send the output to a stream / front-end UI rather than displaying them directly? Thanks!

glenn-jocher commented 4 years ago

@imabhijit can you be more specific?

imabhijit commented 4 years ago

So I have some older code that takes an RTSP stream and pipes the frames using FFmpeg. Next, a second FFmpeg process takes those images from the pipe and outputs them as an HLS stream, after using yolov4 with cv2.dnn to do some object detection. The HLS stream is then captured and displayed in the front-end. I have trained a yolov5 model and would like to use it to replace the yolov4 model; however, I saw that export for use with dnn is not yet supported. Now, it seems detect.py already has many of the features I need, such as reading RTSP streams directly and running detection on them. So my question is: how can I run detect.py so that it outputs frames to a pipe or stream instead of displaying them directly (as it currently does)? Thanks :)

glenn-jocher commented 4 years ago

@imabhijit all of the YOLOv5 predictions, regardless of source, are available in detect.py as Python variables, so you can add additional logic of your own to operate on those values as you see fit within the detection for loop:

You can access predictions here: https://github.com/ultralytics/yolov5/blob/c8c5ef36c9a19c7843993ee8d51aebb685467eca/detect.py#L71-L78

imabhijit commented 4 years ago

OK, I see. Thank you!

SamYokai commented 2 years ago

It works! you are awesome guys! thanks for this fantastic tool. It has helped me a lot!!

Thanks @glenn-jocher !

Just to all the people who need it. I set my streams.txt file as following:

http://192.168.0...
rtsp://admin:...

Sorry, can I ask about the first line: how can I get this IP camera address? I tried but still cannot. Can you give a full example?

hdnh2006 commented 2 years ago

It works! you are awesome guys! thanks for this fantastic tool. It has helped me a lot!! Thanks @glenn-jocher ! Just to all the people who need it. I set my streams.txt file as following:

http://192.168.0...
rtsp://admin:...

sorry, can I ask that for the first line, how can I get this ip camera? because I had try but still cannot, can i know what is the full example?

It is impossible for us to know how to get the IP of each camera, since there are many models on the market. You should ask your manufacturer, or connect the camera to your router and check the IP there.

iamharry-dev commented 2 years ago

@hdnh2006 Thanks for your suggestion, it will be very useful for me, but can you please post a sample streams.txt file with multiple camera IPs?

Thanks

glenn-jocher commented 2 years ago

@iamharry-dev it's just one stream address per line, nothing else

Nyi-Zaw-Aung commented 2 years ago

Hi, instead of multiple streams, can I set multiple sources or a list of sources in detect.py? I want to run detection on all videos inside my subdirectories. Is there a way to do that?

glenn-jocher commented 2 years ago

@Nyi-Zaw-Aung what kind of sources are you looking for?

g-i-o-r-g-i-o commented 2 years ago

sorry, can I ask that for the first line, how can I get this ip camera? because I had try but still cannot, can i know what is the full example?

it doesn't work for me... with or without quotes, I get the same error

File "D:\sbin\yolov5\utils\dataloaders.py", line 339, in __init__
assert cap.isOpened(), f'{st}Failed to open {s}'
AssertionError: 1/2: ï»¿https://www.youtube.com/watch?v=7aSkJCUDAes... Failed to open ï»¿https://www.youtube.com/watch?v=7aSkJCUDAes

joesouaidd commented 2 years ago

Hello, when running multiple sources the code does not save the results. What can I do?

glenn-jocher commented 2 years ago

@joesouaidd 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be minimal, complete, and reproducible.

For Ultralytics to provide assistance, your code should also be up to date with the repository and unmodified.

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

Nyi-Zaw-Aung commented 2 years ago

@Nyi-Zaw-Aung what kind of sources are you looking for?

Such as .mkv and .mp4. I have a directory called Folder A, which includes Folder B, Folder C and Folder D, and each of Folders B, C, D contains videos. Instead of multiple streaming devices, I want to run detection on Folders B, C, D without running each folder individually, and I don't want to copy all the videos into a single folder. Is there a way to pass the directories of Folder B, Folder C, and so on?

Thank you 😄

glenn-jocher commented 2 years ago

@Nyi-Zaw-Aung yes, you can use a recursive glob pattern to run inference on all subdirectories, i.e.

python detect.py --source path/to/dir/**/*
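The `**` pattern is expanded by Python's glob module with `recursive=True`; a small self-contained sketch (the folder layout here is made up for illustration):

```python
import glob
import os
import tempfile

# Build a toy layout: FolderA/{FolderB,FolderC}/clip.mp4
root = tempfile.mkdtemp()
for sub in ('FolderB', 'FolderC'):
    os.makedirs(os.path.join(root, 'FolderA', sub))
    open(os.path.join(root, 'FolderA', sub, 'clip.mp4'), 'w').close()

# '**' with recursive=True descends into every subdirectory
pattern = os.path.join(root, 'FolderA', '**', '*.mp4')
matches = sorted(glob.glob(pattern, recursive=True))
print(len(matches))  # 2
```

The command above uses `**/*` to match every file; narrowing the pattern to `**/*.mp4` as in this sketch limits it to one extension.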
duy1851999 commented 2 years ago

When I run this code, the screen continuously switches between the rtsp links, and it is hard to see the overall view. How can I change this? Thanks.

glenn-jocher commented 2 years ago

@duy1851999 you can disable viewing by setting view_img=False: https://github.com/ultralytics/yolov5/blob/e42c89d4efc99bfbd8c5c208ffe67c11632da84a/detect.py#L174-L183

duy1851999 commented 2 years ago

How can I run multiple videos, not streams, sir?

shubhambagwari commented 2 years ago

@glenn-jocher How can we do this for multiple videos in parallel?

glenn-jocher commented 2 years ago

@shubhambagwari https://github.com/ultralytics/yolov5/issues/1100#issuecomment-705569683

utkarsh-iitbhu commented 1 year ago

@glenn-jocher I want to run my yolov5 model on various CCTV footage simultaneously; I have created my stream.txt file, where I have given the IP addresses of different webcams. But it is showing me this error.

[screenshot of the error]

It works fine for any video or image but shows an error for rtsp:..... It is treating the entire path of stream.txt as the source instead of the webcam links inside it. I want to pass only the webcam addresses to my model.

[screenshot of stream.txt]

This is my stream.txt; how can I access these links? Do let me know.

glenn-jocher commented 1 year ago

@utkarsh-iitbhu file must be named *.streams

rajeshroy402 commented 1 year ago

@hdnh2006 multi-stream capability is already built in, we've created a multithreaded streamloader that feeds detect.py:

https://github.com/ultralytics/yolov5/blob/77940c3f42d0f0542d346bfe5fa913f8b0033b5c/utils/datasets.py#L255

To use multiple streams you simply create a text file with the addresses (https, rtsp etc), one per line, and pass it as a source. For 16 simultaneous streams for example:

python detect.py --batch 16 --source streams.txt

So first of all, I get detect.py: error: unrecognized arguments: --batch 2 when I run python3 detect.py --batch 2 --source .streams, where the .streams file has 2 URIs (local video files).

So I tried running without --batch and it started running, but after processing a few frames it throws this error -

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
1/2: /home/rajesh/Videos/sample_720p.mp4... Success (1442 frames 1280x720 at 30.00 FPS)
2/2: /home/rajesh/Videos/sample_720p.mp4... Success (1442 frames 1280x720 at 30.00 FPS)

0: 384x640 4 persons, 9 cars, 1: 384x640 4 persons, 12 cars, 367.7ms
0: 384x640 3 persons, 6 cars, 1 bus, 1: 384x640 4 persons, 6 cars, 1 truck, 13.5ms
0: 384x640 4 persons, 10 cars, 1: 384x640 3 persons, 11 cars, 1 truck, 11.2ms
0: 384x640 5 persons, 13 cars, 1: 384x640 1 person, 9 cars, 9.6ms
0: 384x640 3 persons, 9 cars, 1: 384x640 4 persons, 11 cars, 13.4ms
0: 384x640 3 persons, 7 cars, 2 trucks, 1: 384x640 3 persons, 11 cars, 11.9ms
0: 384x640 4 persons, 7 cars, 1: 384x640 3 persons, 6 cars, 1 truck, 10.0ms
0: 384x640 5 persons, 11 cars, 1: 384x640 2 persons, 9 cars, 1 truck, 11.6ms
0: 384x640 3 persons, 8 cars, 1: 384x640 4 persons, 9 cars, 1 truck, 10.7ms
WARNING ⚠️ Video stream unresponsive, please check your IP camera connection.
0: 384x640 2 persons, 8 cars, 1: 384x640 2 persons, 11 cars, 2 trucks, 11.5ms
WARNING ⚠️ Video stream unresponsive, please check your IP camera connection.
0: 384x640 1: 384x640 2 persons, 8 cars, 9.1ms
Speed: 0.3ms pre-process, 21.8ms inference, 1.3ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp12

I have tried --batch-size as well, but that didn't work either. I tried with two different videos; that didn't work out. Then I tried with two https URIs, and that didn't work either.

hdnh2006 commented 1 year ago

@rajeshroy402 You don't need to add --batch-size or --batch as a parameter, because the batch size is assigned automatically, as shown here: https://github.com/ultralytics/yolov5/blob/65071da7181e2ede9d3514f20c88e6bd646af07c/detect.py#L107

As of several commits ago, there is a big difference between .streams and .txt files. The first one treats each line as an rtsp or http camera stream, while the second one is used to process several images or videos at the same time. Read the documentation again.

Mps24-7uk commented 1 year ago

@glenn-jocher I have 5 videos in a folder. Is it possible to run the multiple videos in parallel (not one at a time)?

traumasv commented 1 year ago

@glenn-jocher I have 5 videos in a folder. Is it possible to run the multiple videos in parallel (not one at a time)?

Hey @Mps24-7uk, as Glenn mentioned, the model supports multithreading out of the box, so you can open multiple threads, each with its own OpenCV VideoCapture object, and use the same model to run inference on multiple videos simultaneously. In your case, that would be 5 threads.
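A standard-library sketch of that pattern (the `fake_infer` function and synthetic frame lists are made-up stand-ins for the real model call and cv2.VideoCapture; a lock serializes access to the shared model, since concurrent inference on one model isn't guaranteed to be thread-safe):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

model_lock = threading.Lock()

def fake_infer(frame):
    # Stand-in for `model(frame)`; returns a fake detection count.
    return len(frame)

def process_video(name, frames, results):
    detections = []
    for frame in frames:      # stand-in for the cap.read() loop
        with model_lock:      # one inference at a time on the shared model
            detections.append(fake_infer(frame))
    results[name] = detections

# 5 "videos" of different lengths, processed by 5 worker threads
videos = {f'video{i}': ['frame'] * (i + 1) for i in range(5)}
results = {}
with ThreadPoolExecutor(max_workers=5) as pool:
    for name, frames in videos.items():
        pool.submit(process_video, name, frames, results)

print(sorted(results))  # ['video0', 'video1', 'video2', 'video3', 'video4']
```

Whether the lock is needed depends on the model and backend; dropping it trades safety for throughput.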

iakram123 commented 1 year ago

@utkarsh-iitbhu file must be named *.streams

Hello, I would like to apply yolov5 to multiple cameras, and I have a .txt file which contains two URLs for the streams. The question is how do I convert it to a .streams file, or how do I create a .streams file with those URLs?

Rich2020 commented 1 year ago

@iakram123 Have you tried using just a text file, e.g. streams.txt?

iakram123 commented 1 year ago

@iakram123 Have you tried using just a text file, e.g. streams.txt?

Yes I did, but it didn't work; it has to be a .streams file.

Rich2020 commented 1 year ago

@iakram123 OK, just open a text editor (Notepad, Notepad++, Sublime, etc.), add your streams (one per line) and then save the file as, e.g., mystreams.streams.

Note the extension must be .streams
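For example, from a shell (the addresses below are placeholders, not real cameras):

```shell
# One stream address per line; the .streams extension is what matters.
printf '%s\n' \
  'rtsp://admin:password@192.168.0.10:554/stream1' \
  'http://192.168.0.11:8080/video' > mystreams.streams

# then run, e.g.: python detect.py --source mystreams.streams
```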

nguyenminhbntt1 commented 1 year ago

@glenn-jocher I was able to run 2 webcams simultaneously for object recognition by creating stream.txt. Sir, please show me how to get the coordinates of each bounding box in each webcam; I need them to control my motor. Thank you very much, sir.

glenn-jocher commented 1 year ago

@nguyenminhbntt1 hello! Glad to know that you were able to run the webcams successfully. Regarding obtaining the coordinates of the bounding boxes in each webcam, you can access them through the output of the detection script. The output is a list containing a dictionary for each image/frame input. These dictionaries contain information such as the detected class and the coordinates of the bounding box. You can extract this information for each detection and use it to control your motor. Hope this helps! Let me know if you have any further questions.

nguyenminhbntt1 commented 1 year ago

@glenn-jocher But I can't distinguish which is the bounding box of webcam1 and which is the bounding box of webcam2. Both of my webcams need to be controlled by separate motor assemblies based on bounding box coordinates. I'm a newbie, so there are many mistakes; can you guide me in more detail?

glenn-jocher commented 1 year ago

@nguyenminhbntt1 If you are detecting objects from multiple webcams, you can differentiate the bounding boxes by checking the ID of the camera or image source. You can add an argument in the detect script to pass the ID for each camera, then add this ID to the dictionary containing the bounding box information. You can access this ID later to match it with the motor assembly that the bounding box coordinates correspond to. If you need more detailed guidance, feel free to provide more information about your setup or ask specific questions.
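A toy sketch of that bookkeeping (the box data and source names are made up; in detect.py the batch index `i` from `enumerate(pred)` plays this role):

```python
# Sources in the same order they appear in the streams file
sources = ['rtsp://cam_left', 'rtsp://cam_right']

# Fake per-image predictions: one list of (x1, y1, x2, y2, conf) boxes per source
pred = [
    [(10, 10, 50, 50, 0.9)],                      # detections from sources[0]
    [(20, 20, 60, 60, 0.8), (5, 5, 15, 15, 0.7)]  # detections from sources[1]
]

tagged = []
for i, det in enumerate(pred):   # i is the batch index == source index
    for box in det:
        tagged.append({'source': sources[i], 'box': box[:4], 'conf': box[4]})

print([d['source'] for d in tagged])
# ['rtsp://cam_left', 'rtsp://cam_right', 'rtsp://cam_right']
```

Each motor assembly can then filter `tagged` by its own source string.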

Abduqayyum commented 1 year ago

@glenn-jocher Hi, thanks for your advice. I have managed to make the YOLOv5 model work on multiple sources. Previously I passed a single source and passed it as a new argument to the Annotator class when running detect.py, like this: annotator = Annotator(im0, line_width=line_thickness, example=str(names), url=url, camera=camera_source). In the Annotator class I made some changes so that I can send detected data using the requests module. But after passing multiple sources in a streams.txt file, I am not able to pass each source separately, so I cannot report which source a detection came from. Can you help me with that?

glenn-jocher commented 1 year ago

@Abduqayyum It's great to hear about your progress with YOLOv5! To handle multiple sources and label each detection with its appropriate source, you can modify the annotator setup to include the source information. You might consider passing the source ID along with the frame to the Annotator class, or modify the Annotator class to handle multiple sources and their respective detections separately. This can help you accurately label each detection with its source and send the data accordingly. Feel free to provide more details or ask for specific help. Good luck with your project!

Abduqayyum commented 1 year ago

@glenn-jocher Thanks for the useful information. Thankfully, I have managed to resolve that problem. When I pass the txt file, I list all the sources available in that file, and I figured out how to get the camera ID. In detect.py we iterate through each prediction by default:

```python
for i, det in enumerate(pred):  # per image
    seen += 1
    if webcam:  # batch_size >= 1
        p, im0, frame = path[i], im0s[i].copy(), dataset.count
        s += f'{i}: '
    else:
        p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
    print(i, "Source id")
    ...
```

Here I get the source ID from the enumerate function. Once I have the source ID, I know which source a detected object belongs to through the list I declared. Then I pass that source to the Annotator class as I did with a single source. This is how you know which source a detected object belongs to.

glenn-jocher commented 1 year ago

@Abduqayyum That's fantastic to hear! It's great that you were able to resolve the issue and successfully identify which source each detection belongs to by leveraging the source ID from the enumerate function. By passing the source to the Annotator class, you can accurately track the source of each detection. Your solution will definitely be helpful for others facing similar challenges. Good luck with your project, and feel free to reach out if you have more questions in the future!

b4u365 commented 1 year ago

Hi Team,

Thanks for your Great Efforts,

I am doing multi-streaming with YOLOv5 using this command: python detect_ir_vi.py --source streams.txt. Here YOLOv5 takes two images as inputs from two different cameras capturing the same scene; the images are aligned (an object will be at the same position in both images). Can we run NMS on both images simultaneously and assign or plot the biggest confidence score on both, meaning both images get the same highest confidence score?

Can you help in this.

Your help will be of great value to me.

Thanks and Regards, Bharath.

glenn-jocher commented 1 year ago

@b4u365 thank you for the kind words! YOLOv5 currently treats each input image independently, and simultaneous detection is not supported out of the box. You can fuse the detections from both images externally, selecting the highest confidence score for the same object, or modify the YOLOv5 code to support simultaneous detection. However, architecture changes require deep understanding. If you need help with customizing YOLOv5, refer to the Ultralytics Docs for guidance. Good luck with your project!
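One way to sketch the external fusion (the `fuse` helper, the IoU threshold, and all data here are illustrative assumptions, not YOLOv5 API): since the two images are aligned, detections can be matched by box overlap and both copies given the higher of the two confidence scores.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(dets_a, dets_b, thr=0.5):
    # Give matched (aligned) detections in both lists the max of their scores.
    for da in dets_a:
        for db in dets_b:
            if iou(da['box'], db['box']) >= thr:
                best = max(da['conf'], db['conf'])
                da['conf'] = db['conf'] = best

ir = [{'box': (10, 10, 50, 50), 'conf': 0.62}]  # e.g. IR camera detection
vi = [{'box': (12, 11, 51, 52), 'conf': 0.91}]  # e.g. visible camera detection
fuse(ir, vi)
print(ir[0]['conf'], vi[0]['conf'])  # 0.91 0.91
```

This runs after each camera's own NMS, so neither detection pipeline needs to change.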