ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0
49.7k stars 16.12k forks

How to use multiple models for detection at the same time? #4673

Closed shizhanhao closed 2 years ago

shizhanhao commented 3 years ago

Hello, dear developers. Can you tell me how to merge models trained on multiple datasets? Say I trained one model on a traffic-light dataset, another on a car dataset, and a third on a pedestrian dataset. How do I run inference with all three models at the same time to recognize traffic lights, cars, and pedestrians?

github-actions[bot] commented 3 years ago

👋 Hello @shizhanhao, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

kinoute commented 3 years ago

Why not merge these models/datasets into one?

shizhanhao commented 3 years ago

Why not merge these models/datasets into one?

First of all, thank you for your reply. Each dataset has tens of thousands of photos, and each dataset annotates only one category. For example, most of the photos in the vehicle dataset contain traffic lights, but they are not labeled, so directly combining multiple datasets would hurt the mAP, and a model trained that way would not generalize well.

kinoute commented 3 years ago

Could you not use your traffic lights model to generate annotations files for the vehicle dataset? (to annotate the traffic lights)

That's what I did. I trained a small model made quickly to pseudo-annotate the rest of my dataset. I put my model as an API service, got the predictions/normalized coordinates, and generated the annotation text files from the result, for every image.

After that, you just have to double-check if the predicted coordinates are correct (to remove some false positive/negative), but it's quick. I checked 10k images in around 2h.
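For context, here is a minimal sketch of the pseudo-labeling step described above, assuming the model is loaded through torch.hub. The file names (`lights.pt`, `image.jpg`, `image.txt`) are hypothetical placeholders, and the hub calls are left commented out because they require your own weights:

```python
def to_yolo_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space xyxy box to a YOLO-format label line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical pseudo-labeling pass ('lights.pt' stands in for your model):
# import torch
# model = torch.hub.load('ultralytics/yolov5', 'custom', path='lights.pt')
# results = model('image.jpg')
# h, w = results.ims[0].shape[:2]          # original image size
# lines = [to_yolo_line(int(c), x1, y1, x2, y2, w, h)
#          for x1, y1, x2, y2, conf, c in results.xyxy[0].tolist()]
# with open('image.txt', 'w') as f:        # YOLO label file next to the image
#     f.write('\n'.join(lines))
```

After generating the label files, a quick manual review pass (as described above) catches false positives/negatives before retraining.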

shizhanhao commented 3 years ago

Could you not use your traffic lights model to generate annotations files for the vehicle dataset? (to annotate the traffic lights)

That's what I did. I trained a small model made quickly to pseudo-annotate the rest of my dataset. I put my model as an API service, got the predictions/normalized coordinates, and generated the annotation text files from the result, for every image.

After that, you just have to double-check if the predicted coordinates are correct (to remove some false positive/negative), but it's quick. I checked 10k images in around 2h.

This method can indeed work, but the workload is still somewhat large. Thank you for your reply and support.

sekisek commented 2 years ago

I have the same question but in a different scenario: is it possible to load multiple models in the same script (detect.py)?

glenn-jocher commented 2 years ago

@sekisek yes you can run inference with multiple models at the same time. See Model Ensembling Tutorial below:

YOLOv5 Tutorials
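For reference, the ensembling approach in that tutorial passes several weights files to the same script, which appends the models' outputs at inference time; this works best when the models share the same classes. A sketch with example paths:

```shell
# Ensemble two sets of weights in a single detect.py run (paths are examples):
python detect.py --weights yolov5x.pt yolov5l6.pt --img 640 --source data/images
```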

sekisek commented 2 years ago

Hi @glenn-jocher, thanks, I read it! But can you combine two different models, for example a car-plate-detection model with one class and the regular yolov5x.pt, or any two custom models with different classes?

shizhanhao commented 2 years ago

I have tried it. The two models must come from the same dataset to use the method the author described, which means two models trained on different datasets cannot be ensembled together. However, you can run inference on the same photo multiple times with a different model each time.
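As a sketch of that multi-pass idea: run each model on the same image independently and merge the outputs in Python, tagging each box with the model it came from so class IDs from different datasets don't collide. The torch.hub calls and weight paths (`cars.pt`, `lights.pt`, `street.jpg`) are hypothetical and left commented out:

```python
def merge_detections(*detection_lists):
    """Combine per-model detection lists into one list, tagging each
    box with the name of the model that produced it.

    Each argument is a (model_name, detections) pair, where detections
    are (x1, y1, x2, y2, conf, cls_id) tuples as returned by
    results.xyxy[0].tolist() in YOLOv5."""
    merged = []
    for model_name, dets in detection_lists:
        for x1, y1, x2, y2, conf, cls_id in dets:
            merged.append((model_name, cls_id, conf, (x1, y1, x2, y2)))
    return merged

# Hypothetical usage with two independently trained models:
# import torch
# cars = torch.hub.load('ultralytics/yolov5', 'custom', path='cars.pt')
# lights = torch.hub.load('ultralytics/yolov5', 'custom', path='lights.pt')
# all_boxes = merge_detections(
#     ('cars', cars('street.jpg').xyxy[0].tolist()),
#     ('lights', lights('street.jpg').xyxy[0].tolist()),
# )
```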

github-actions[bot] commented 2 years ago

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Access additional YOLOv5 🚀 resources:

Access additional Ultralytics ⚡ resources:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

minhN2000 commented 2 years ago

Hi @shizhanhao, may I ask what commands you need to run in order to process one photo with different models? Thank you!

vayvay0993 commented 2 years ago

@shizhanhao I have exactly the same question as you. Have you found a solution to this issue?

jjjonathan96 commented 2 years ago

I have trained a YOLOv5 model for hand-held object detection, and there are existing YOLO models for face detection, so I need a single model that detects both faces and hand-held objects. Is it possible to have a single model? Would meta-learning make sense?

glenn-jocher commented 2 years ago

See https://community.ultralytics.com/t/how-to-combine-weights-to-detect-from-multiple-datasets

hemalbeselial commented 9 months ago

Hi, I have a question. I am using a YOLOv5n model for object detection, and I have separate code for face recognition, which I have now added to detect.py. When I run my code it gives me two separate result windows, but I need both the face recognition and the YOLO object detection to appear on the same screen. How can I do that?

glenn-jocher commented 9 months ago

@hemalbeselial you can use the detect.py script to run both face recognition and YOLO object detection on the same screen. Simply integrate both models into the same script and add the logic needed to display both sets of results together in a single output window.
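One way to get a single window is to collect the boxes from both models first and draw them all on one frame before a single cv2.imshow call. Below is a hedged sketch: `yolo_boxes` and `face_boxes` are hypothetical stand-ins for your two detectors, and the OpenCV calls are commented out so the helper stays self-contained:

```python
def overlay_boxes(frame_shape, detections):
    """Clamp each (label, x1, y1, x2, y2) box to the frame bounds so
    drawing never goes out of range; returns the cleaned draw list."""
    h, w = frame_shape[:2]
    cleaned = []
    for label, x1, y1, x2, y2 in detections:
        cleaned.append((label,
                        max(0, min(int(x1), w - 1)), max(0, min(int(y1), h - 1)),
                        max(0, min(int(x2), w - 1)), max(0, min(int(y2), h - 1))))
    return cleaned

# import cv2
# frame = cv2.imread('frame.jpg')
# # yolo_boxes() and face_boxes() are hypothetical wrappers around your
# # two detectors, each returning (label, x1, y1, x2, y2) tuples:
# boxes = overlay_boxes(frame.shape, yolo_boxes(frame) + face_boxes(frame))
# for label, x1, y1, x2, y2 in boxes:
#     cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
#     cv2.putText(frame, label, (x1, y1 - 5),
#                 cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
# cv2.imshow('combined', frame)   # one window for both result sets
# cv2.waitKey(0)
```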