ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

How to use multiple weights to detect a video? #8881

Closed · sourabmaity closed this issue 2 years ago

sourabmaity commented 2 years ago


Question

I have a custom pen.pt model that I want to use together with yolov5m.pt to detect objects in a video. I ran

!python detect.py --weights pen.pt yolov5m.pt --img 640 --conf 0.50 --source /content/latest.mp4

but it throws the error below:

AssertionError: Models have different class counts: [80, 1]

How can I use multiple models at the same time?

Please help.

Additional

I don't have a dataset, so I can't retrain the models and merge them into one.

github-actions[bot] commented 2 years ago

👋 Hello @sourabmaity, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 2 years ago

@sourabmaity you can load multiple models with PyTorch Hub and have them run inference in a video frame loop. YOLOv5 🚀 PyTorch Hub models allow for simple model loading and inference in a pure python environment without using detect.py.

Simple Inference Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the YOLOv5 'small' model. For details on all available models please see the README. Custom models can also be loaded, including custom trained PyTorch models and their exported variants, i.e. ONNX, TensorRT, TensorFlow, OpenVINO YOLOv5 models.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # yolov5n - yolov5x6 official model
#                                            'custom', 'path/to/best.pt')  # custom model

# Images
im = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, URL, PIL, OpenCV, numpy, list

# Inference
results = model(im)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
results.xyxy[0]  # im predictions (tensor)

results.pandas().xyxy[0]  # im predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

results.pandas().xyxy[0].value_counts('name')  # class counts (pandas)
# person    2
# tie       1

See YOLOv5 PyTorch Hub Tutorial for details.

Good luck 🍀 and let us know if you have any other questions!
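To make the frame-loop idea above concrete, here is a minimal sketch that loads two independent models and runs both on every video frame with OpenCV. The weight file pen.pt and the video latest.mp4 are the hypothetical names from the question; the detect_frame helper is illustrative, not part of the YOLOv5 API.

```python
def detect_frame(frame, models):
    """Run each model on one frame; return a list of (model_index, DataFrame)."""
    outputs = []
    for i, m in enumerate(models):
        results = m(frame)
        outputs.append((i, results.pandas().xyxy[0]))
    return outputs

if __name__ == '__main__':
    # Heavy imports kept here so the helper above stays importable without them.
    import torch
    import cv2

    model_pen = torch.hub.load('ultralytics/yolov5', 'custom', path='pen.pt')  # custom model
    model_coco = torch.hub.load('ultralytics/yolov5', 'yolov5m')               # official model

    cap = cv2.VideoCapture('latest.mp4')
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = frame[..., ::-1]  # OpenCV frames are BGR; Hub models expect RGB
        for idx, df in detect_frame(rgb, [model_pen, model_coco]):
            print(idx, df[['name', 'confidence']].to_dict('records'))
    cap.release()
```

Each model keeps its own class names this way, so the label collision from detect.py ensembling never arises.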

sourabmaity commented 2 years ago

I have a pen.pt model (which detects a pen) and a cap.pt model (which detects a pen cap). Now I run

!python detect.py --weights cap.pt pen.pt --img 640 --conf 0.50 --source VID_20220727_185703.mp4

It runs properly and detects both pens and caps, but it labels both as "cap".

How can I solve this label problem without retraining?

sourabmaity commented 2 years ago

@glenn-jocher sorry for the tag. I am still waiting for your answer and remain confused about the labels.

glenn-jocher commented 2 years ago

@sourabmaity ensembled models must be trained on exactly the same classes. As I mentioned above, if you want to run multiple models with different classes, PyTorch Hub inference is the correct approach.
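One plausible reading of the cap/pen mislabeling, sketched as a toy example (this is an illustration of the index collision, not detect.py's actual code): both single-class models emit class index 0, and an ensemble resolves every index through one shared names map, so every box gets the first model's name.

```python
# Each single-class model predicts class index 0 for its one class.
names_cap = {0: 'cap'}  # class map of a hypothetical cap.pt
names_pen = {0: 'pen'}  # class map of a hypothetical pen.pt

# An ensemble resolves all predictions through one shared names map
# (here the first model's), so the other model's boxes are mislabeled.
shared_names = names_cap
predicted_indices = [0, 0]  # one detection from each model
labels = [shared_names[i] for i in predicted_indices]
print(labels)  # ['cap', 'cap'] - both boxes labeled 'cap'
```

Running the models separately, each with its own names map, avoids the collision entirely.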

stphtan94117 commented 1 year ago

@glenn-jocher I have two custom weights, one for plate mask and the other for plate number.

I'm using PyTorch Hub to load the two weights, but it seems I can't run detection and save both results at the same time. Below is my code.

import torch
import os
import json

# Model
model = torch.hub.load('./yolov5', 'custom', path='mask.pt', source="local")
model = torch.hub.load('./yolov5', 'custom', path='plate.pt', source="local")
model.conf = 0.7

# Folder path
folder_path = 'ABC'  

# Get list of image files in the folder
image_files = [os.path.join(folder_path, file) for file in os.listdir(folder_path) if file.endswith(('.jpg', '.JPG', '.png', '.PNG'))]

# Inference for each image
for image_file in image_files:

    # Image
    im = image_file

    # Inference
    results = model(im, size=1280)

    # Get bounding box results
    bounding_boxes = results.pandas().xyxy[0].sort_values('xmin')[['name']].to_json(orient="records")

    # Convert bounding_boxes to a list of dictionaries
    bounding_boxes = json.loads(bounding_boxes)
    path = 'test'

    results.save(save_dir=path, exist_ok=True)

glenn-jocher commented 11 months ago

@stphtan94117 the issue arises because you re-assign the model variable to the plate model right after loading the mask model, so the first model is overwritten and only plate.pt ever runs. To solve this without retraining, load the two models into separate variables and run inference with each:

import torch
import os
import json

# Model for mask
model_mask = torch.hub.load('./yolov5', 'custom', path='mask.pt', source="local")
model_mask.conf = 0.7

# Model for plate
model_plate = torch.hub.load('./yolov5', 'custom', path='plate.pt', source="local")
model_plate.conf = 0.7

# Folder path
folder_path = 'ABC'  

# Get list of image files in the folder
image_files = [os.path.join(folder_path, file) for file in os.listdir(folder_path) if file.endswith(('.jpg', '.JPG', '.png', '.PNG'))]

# Inference for each image
for image_file in image_files:

    # Image
    im = image_file

    # Inference for mask
    results_mask = model_mask(im, size=1280)

    # Get mask bounding box results
    bounding_boxes_mask = results_mask.pandas().xyxy[0].sort_values('xmin')[['name']].to_json(orient="records")

    # Convert bounding_boxes_mask to a list of dictionaries
    bounding_boxes_mask = json.loads(bounding_boxes_mask)

    # Inference for plate
    results_plate = model_plate(im, size=1280)

    # Get plate bounding box results
    bounding_boxes_plate = results_plate.pandas().xyxy[0].sort_values('xmin')[['name']].to_json(orient="records")

    # Convert bounding_boxes_plate to a list of dictionaries
    bounding_boxes_plate = json.loads(bounding_boxes_plate)

    # Save results
    path = 'test'
    results_mask.save(save_dir=os.path.join(path, 'mask'), exist_ok=True)
    results_plate.save(save_dir=os.path.join(path, 'plate'), exist_ok=True)

This way, you can run inference with both models separately and save their results independently without retraining. Let me know if you have any more questions!
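If the two result sets should end up in one record per image rather than two separate folders, the per-model DataFrames can also be tagged and concatenated. A minimal sketch, assuming the results.pandas().xyxy[0] column layout shown above; the merge_detections helper and the 'model' column are illustrative, not part of the YOLOv5 API:

```python
import pandas as pd

def merge_detections(dfs_by_model):
    """Tag each model's detections with its source name and combine them."""
    tagged = []
    for model_name, df in dfs_by_model.items():
        df = df.copy()
        df['model'] = model_name
        tagged.append(df)
    return pd.concat(tagged, ignore_index=True).sort_values('xmin', ignore_index=True)

# Example with synthetic boxes in the results.pandas().xyxy[0] layout:
mask_df = pd.DataFrame({'xmin': [120.0], 'name': ['mask']})
plate_df = pd.DataFrame({'xmin': [40.0], 'name': ['plate']})
combined = merge_detections({'mask': mask_df, 'plate': plate_df})
print(combined[['name', 'model']].to_dict('records'))
# [{'name': 'plate', 'model': 'plate'}, {'name': 'mask', 'model': 'mask'}]
```

Sorting by xmin keeps the reading order left to right, which is convenient when the plate characters need to be assembled into a string afterwards.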