ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Load YOLOv5 from PyTorch Hub ⭐ #36

glenn-jocher opened this issue 4 years ago

glenn-jocher commented 4 years ago

📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub https://pytorch.org/hub/ultralytics_yolov5. See YOLOv5 Docs for additional details. UPDATED 26 March 2023.

Before You Start

Install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.

pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

💡 ProTip: Cloning https://github.com/ultralytics/yolov5 is not required 😃

Load YOLOv5 with PyTorch Hub

Simple Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the lightest and fastest YOLOv5 model. For details on all available models please see the README.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(im)

results.pandas().xyxy[0]
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

Detailed Example

This example shows batched inference with PIL and OpenCV image sources. results can be printed to the console, saved to runs/hub, shown on screen in supported environments, and returned as tensors or pandas DataFrames.

import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
for f in 'zidane.jpg', 'bus.jpg':
    torch.hub.download_url_to_file('https://ultralytics.com/images/' + f, f)  # download 2 images
im1 = Image.open('zidane.jpg')  # PIL image
im2 = cv2.imread('bus.jpg')[..., ::-1]  # OpenCV image (BGR to RGB)

# Inference
results = model([im1, im2], size=640)  # batch of images

# Results
results.print()  
results.save()  # or .show()

results.xyxy[0]  # im1 predictions (tensor)
results.pandas().xyxy[0]  # im1 predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

For all inference options see YOLOv5 AutoShape() forward method: https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252

Inference Settings

YOLOv5 models contain various inference attributes such as the confidence threshold and IoU threshold, which can be set by:

model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
model.max_det = 1000  # maximum number of detections per image
model.amp = False  # Automatic Mixed Precision (AMP) inference

results = model(im, size=320)  # custom inference size

Device

Models can be transferred to any device after creation:

model.cpu()  # CPU
model.cuda()  # GPU
model.to(device)  # i.e. device=torch.device(0)

Models can also be created directly on any device:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')  # load on CPU

💡 ProTip: Input images are automatically transferred to the correct model device before inference.

Silence Outputs

Models can be loaded silently with _verbose=False:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False)  # load silently

Input Channels

To load a pretrained YOLOv5s model with 4 input channels rather than the default 3:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', channels=4)

In this case the model will be composed of pretrained weights except for the very first input layer, which is no longer the same shape as the pretrained input layer. The input layer will remain randomly initialized.

Number of Classes

To load a pretrained YOLOv5s model with 10 output classes rather than the default 80:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)

In this case the model will be composed of pretrained weights except for the output layers, which are no longer the same shape as the pretrained output layers. The output layers will remain randomly initialized.

Force Reload

If you run into problems with the above steps, setting force_reload=True may help by discarding the existing cache and forcing a fresh download of the latest YOLOv5 version from PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)  # force reload

Screenshot Inference

To run inference on your desktop screen:

import torch
from PIL import ImageGrab

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = ImageGrab.grab()  # take a screenshot

# Inference
results = model(im)

Multi-GPU Inference

YOLOv5 models can be loaded to multiple GPUs in parallel with threaded inference:

import torch
import threading

def run(model, im):
    results = model(im)
    results.save()

# Models
model0 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=0)
model1 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=1)

# Inference
threading.Thread(target=run, args=[model0, 'https://ultralytics.com/images/zidane.jpg'], daemon=True).start()
threading.Thread(target=run, args=[model1, 'https://ultralytics.com/images/bus.jpg'], daemon=True).start()

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch) use pretrained=False. You must provide your own training script in this case. Alternatively see our YOLOv5 Train Custom Data Tutorial for model training.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

Base64 Results

For use with API services. See https://github.com/ultralytics/yolov5/pull/2291 and Flask REST API example for details.

import base64
from io import BytesIO
from PIL import Image

results = model(im)  # inference

results.ims  # array of original images (as np array) passed to model for inference
results.render()  # updates results.ims with boxes and labels
for im in results.ims:
    buffered = BytesIO()
    im_base64 = Image.fromarray(im)
    im_base64.save(buffered, format="JPEG")
    print(base64.b64encode(buffered.getvalue()).decode('utf-8'))  # base64-encoded image with results

Cropped Results

Results can be returned and saved as detection crops:

results = model(im)  # inference
crops = results.crop(save=True)  # cropped detections dictionary

Pandas Results

Results can be returned as Pandas DataFrames:

results = model(im)  # inference
results.pandas().xyxy[0]  # Pandas DataFrame
Pandas Output:

print(results.pandas().xyxy[0])
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

Sorted Results

Results can be sorted by column, i.e. to sort license plate digit detection left-to-right (x-axis):

results = model(im)  # inference
results.pandas().xyxy[0].sort_values('xmin')  # sorted left-right

JSON Results

Results can be returned in JSON format once converted to .pandas() dataframes using the .to_json() method. The JSON format can be modified using the orient argument. See pandas .to_json() documentation for details.

results = model(ims)  # inference
results.pandas().xyxy[0].to_json(orient="records")  # JSON img1 predictions
JSON Output:

[
  {"xmin":749.5,"ymin":43.5,"xmax":1148.0,"ymax":704.5,"confidence":0.8740234375,"class":0,"name":"person"},
  {"xmin":433.5,"ymin":433.5,"xmax":517.5,"ymax":714.5,"confidence":0.6879882812,"class":27,"name":"tie"},
  {"xmin":115.25,"ymin":195.75,"xmax":1096.0,"ymax":708.0,"confidence":0.6254882812,"class":0,"name":"person"},
  {"xmin":986.0,"ymin":304.0,"xmax":1028.0,"ymax":420.0,"confidence":0.2873535156,"class":27,"name":"tie"}
]

Custom Models

This example loads a custom 20-class VOC-trained YOLOv5s model 'best.pt' with PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # local model
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo

TensorRT, ONNX and OpenVINO Models

PyTorch Hub supports inference on most YOLOv5 export formats, including custom trained models. See TFLite, ONNX, CoreML, TensorRT Export tutorial for details on exporting models.

💡 ProTip: TensorRT may be up to 2-5X faster than PyTorch on GPU benchmarks
💡 ProTip: ONNX and OpenVINO may be up to 2-3X faster than PyTorch on CPU benchmarks

model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.pt')  # PyTorch
#                                                           'yolov5s.torchscript'       # TorchScript
#                                                           'yolov5s.onnx'              # ONNX
#                                                           'yolov5s_openvino_model/'   # OpenVINO
#                                                           'yolov5s.engine'            # TensorRT
#                                                           'yolov5s.mlmodel'           # CoreML (macOS-only)
#                                                           'yolov5s.tflite'            # TFLite
#                                                           'yolov5s_paddle_model/'     # PaddlePaddle

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

ghost commented 3 years ago

@glenn-jocher when I use the following line of code for yolov3-tiny: model = torch.hub.load('ultralytics/yolov3', 'yolov3-tiny', force_reload=True).autoshape() it gives me the following error:

Downloading: "https://github.com/ultralytics/yolov3/archive/master.zip" to C:\Users\Asim/.cache\torch\hub\master.zip Traceback (most recent call last): File "E:/Face Mask 3Class Yolov5/new_hub.py", line 7, in model = torch.hub.load('ultralytics/yolov3', 'yolov3-tiny', force_reload=True).autoshape() File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 339, in load model = _load_local(repo_or_dir, model, *args, **kwargs) File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 367, in _load_local entry = _load_entry_from_hubconf(hub_module, model) File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 187, in _load_entry_from_hubconf raise RuntimeError('Cannot find callable {} in hubconf'.format(model)) RuntimeError: Cannot find callable yolov3-tiny in hubconf

glenn-jocher commented 3 years ago

PyTorch Hub model names do not support dashes; you need to use an underscore:

Screen Shot 2021-03-07 at 8 08 30 PM
ghost commented 3 years ago

@glenn-jocher when I use the following line of code: model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', pretrained=True, force_reload=True).autoshape() it gives me the error:

Traceback (most recent call last):
  File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\hubconf.py", line 37, in create
    attempt_download(fname)  # download if not found locally
  File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\utils\google_utils.py", line 30, in attempt_download
    tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
IndexError: list index out of range

The above exception was the direct cause of the following exception:

Traceback (most recent call last): File "C:/Users/Asim/Desktop/Free Lance/New folder/camera-live-streaming/app.py", line 9, in model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', pretrained=True, force_reload=True).autoshape() File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 339, in load model = _load_local(repo_or_dir, model, *args, *kwargs) File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 368, in _load_local model = entry(args, **kwargs) File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\hubconf.py", line 93, in yolov3_tiny return create('yolov3-tiny', pretrained, channels, classes, autoshape) File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\hubconf.py", line 51, in create raise Exception(s) from e Exception: Cache maybe be out of date, try force_reload=True. See https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.

But if I use this line of code: model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', force_reload=True).autoshape() it doesn't give any error, but it also does not detect anything.

glenn-jocher commented 3 years ago

@asim266 this is the YOLOv5 PyTorch Hub tutorial. For questions about other repositories I would recommend you raise an issue there.

bipinkc19 commented 3 years ago

@glenn-jocher First of all, thank you for the fantastic community page and the help from you guys.

One question:

How do I save the model locally and load it from a local file with torch.hub for YOLOv5? This is for a case where there is no internet access.

glenn-jocher commented 3 years ago

@bipinkc19 PyTorch Hub commands only need internet access the first time they are run, to download a cached copy of this repo. After this first time the cache is saved to disk and located for use in subsequent calls.
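
For fully offline use, one option is to keep a local clone of the repository and load from it with source='local'. A minimal sketch, assuming the repo has already been cloned to ./yolov5 and the weights file is already on disk:

import torch

# assumes ./yolov5 is a local clone and yolov5s.pt was downloaded beforehand
model = torch.hub.load('./yolov5', 'custom', path='yolov5s.pt', source='local')  # no internet required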

deepconsc commented 3 years ago

Whoever struggles with the NMS error on the CUDA backend:

RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend.

Update your torchvision to 0.8.1, that should resolve it. 🤟🏼
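
For reference, a typical upgrade command is below; exact versions depend on your CUDA setup, so treat this as a sketch:

pip install --upgrade torch torchvision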

ghost commented 3 years ago

Is there a way to specify a specific class like ["person", "cat"] to only identify person and cat?

glenn-jocher commented 3 years ago

@Xcalizorz hi good question! I've updated the tutorial above with details on how to filter inference results by class:

Inference Settings

Inference settings such as confidence threshold, NMS IoU threshold, and classes filter are model attributes, and can be modified by:

model.conf = 0.25  # confidence threshold (0-1)
model.iou = 0.45  # NMS IoU threshold (0-1)
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for persons, cats and dogs

results = model(imgs, size=320)  # custom inference size
Lauler commented 3 years ago

Could you please provide some more details on the Training section.

How does one properly pass the bounding box data and labels here when using a dataloader? Would very much appreciate an example with some skeleton code.

glenn-jocher commented 3 years ago

@Lauler see Train Custom Data tutorial to get started with training:

YOLOv5 Tutorials

Lauler commented 3 years ago

@glenn-jocher Thanks. I had already read Train Custom Data. I was under the impression that loading a model from torch.hub might allow more flexibility in letting the user specify their own Dataset, similar to the PennFudanDataset in this tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html , since you don't expect users to clone the yolov5 repo. But I should still organize the data according to the Train Custom Data guide?

I think ultimately, at some point in the future, it would be easier if users of object detection libraries could organize their data however they want and create their own train/validation dataloaders (similar to image classification tasks), as opposed to being forced to shuffle image files into folders with specific format requirements.

This is just a general remark (don't take it as negative criticism) about the API design of object detection libraries versus what has become the standard in image classification. Object detection libraries are not very flexible in comparison, and hard to adapt to your own needs or your own validation schemes (cross-validation).

I will use the official way as described!

glenn-jocher commented 3 years ago

@Lauler you can use Hub models for any purpose, including training. Hub models provide nothing except the model itself; you must build your own training/inference infrastructure for whatever custom purposes you have.

Fully managed solutions for training, testing, and inference are also available in train.py, test.py, detect.py.
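
As a rough sketch of what building your own training infrastructure involves: with autoshape=False the Hub model is a plain torch.nn.Module, so standard PyTorch scaffolding applies; the dataloader, the YOLO loss (e.g. the repo's utils.loss.ComputeLoss) and the label format are left to the user. The snippet below uses placeholder data only:

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # plain nn.Module
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.937)

imgs = torch.zeros(4, 3, 640, 640)  # placeholder batch; replace with your own dataloader
preds = model(imgs)  # raw multi-scale predictions (list of tensors)

# loss = compute_loss(preds, targets)  # you must supply a YOLO loss and targets in YOLO format
# loss.backward(); optimizer.step(); optimizer.zero_grad()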

ZixuanLingit666 commented 3 years ago

Why is the problem below still not solved even though I set force_reload=True and tried many times?

[screenshot]

glenn-jocher commented 3 years ago

@rerester hi sorry to see you are having problems. It's hard to determine what your issue may be from the small screenshot you have pasted. If you believe you have a reproducible bug, raise a new issue using the 🐛 Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and diagnose your problem. Thank you!

suyong2 commented 3 years ago

@glenn-jocher I get the error "RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU." at the model = torch.hub.load() line when I run my custom model (trained in a GPU environment) in a CPU environment. How can I use a GPU-trained custom model (trained with the YOLOv5 git version, not PyTorch Hub) in a CPU environment with PyTorch Hub? (Of course, the custom model works fine in a GPU environment with PyTorch Hub.)

glenn-jocher commented 3 years ago

@suyong2 backend assignment is handled automatically in YOLOv5 PyTorch Hub models, so if you have a GPU your model will load there, if not it will load on CPU.

If this does not answer your question and you believe you have a reproducible issue, we suggest you raise a new issue using the 🐛 Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and diagnose your problem. Thank you!
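
If a checkpoint saved on GPU must be loaded on a CPU-only machine, forcing the device at load time is worth trying. A sketch, assuming a hubconf version whose custom entrypoint accepts a device argument ('best.pt' is a placeholder path):

import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt', device='cpu')  # force CPU loading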

pravastacaraka commented 3 years ago

@glenn-jocher, does PyTorch Hub support video inference?

glenn-jocher commented 3 years ago

@pravastacaraka PyTorch Hub can support any inference as long as you build a dataloader for it.

For a fully managed inference solution see detect.py.
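
For example, a minimal frame-by-frame video loop with OpenCV might look like the sketch below; the path and output handling are illustrative:

import cv2
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

cap = cv2.VideoCapture('video.mp4')  # placeholder path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])  # BGR to RGB, single-frame inference
    print(results.pandas().xyxy[0])
cap.release()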

xinxin342 commented 3 years ago

@glenn-jocher Thanks for the tutorial. I don't know why, but the last 3 lines of code don't work. [Screenshot 2021-04-28 200153]

glenn-jocher commented 3 years ago

@xinxin342 the last 3 lines work correctly.

In Python scripts, outputs are not automatically displayed. If you want to print outputs you can use the print() function.

xinxin342 commented 3 years ago

@glenn-jocher
Thank you for solving my question so quickly.

rullisubekti commented 3 years ago

Hello @glenn-jocher, can I load YOLOv5 from my local directory with "torch.hub.load("mydir/yolov5/", "yolov5s")"? I tried it but get the error "too many values to unpack (expected 2)".

glenn-jocher commented 3 years ago

@rullisubekti your code demonstrates incorrect usage. For correct usage read the tutorial above.
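
For reference, loading from a local directory requires the source='local' argument, for example (paths are illustrative):

model = torch.hub.load('mydir/yolov5', 'yolov5s', source='local')  # local repo, official yolov5s weights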

PascalHbr commented 3 years ago

Is there an easy way to run inference with my own model? When I try to follow the same steps, I run into all sorts of problems. Using a custom .pt file doesn't work out of the box. I have been trying to use the AutoShape wrapper provided in common.py, but I get the following error (by the way, there is a bug in the AutoShape class: self.stride is not defined):

RuntimeError: Sizes of tensors must match except in dimension 1. Got 18 and 17 in dimension 2 (The offending index is 1)

glenn-jocher commented 3 years ago

@PascalHbr loading custom YOLOv5 models in PyTorch Hub is very easy, see the 'Custom Models' section in the above tutorial.

pravastacaraka commented 3 years ago

@glenn-jocher how can I run training with PyTorch Hub? From your instructions:

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch) use pretrained=False.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

Then what should I do?

glenn-jocher commented 3 years ago

@pravastacaraka to train YOLOv5 models see Train Custom Data tutorial:

YOLOv5 Tutorials

pravastacaraka commented 3 years ago

@glenn-jocher That answer didn't help me. I will clarify my question.

Can I train my dataset using PyTorch Hub instead of using train.py? Because based on the information you provided above there is a Training section:

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

I thought I could use PyTorch Hub to train my dataset. If so, how do I pass these model variables to my dataset? Is it like this?

results = model('path/to/my-dataset')
dimzog commented 3 years ago

> @pravastacaraka: Can I train my dataset using PyTorch Hub instead of using train.py? ... I thought I could use PyTorch Hub to train my dataset. If so, how do I pass these model variables to my dataset?

I think one should implement their own trainer(); correct me if I'm wrong @glenn-jocher.

pravastacaraka commented 3 years ago

So the conclusion is that we can't do training using PyTorch Hub, right?

> Training: To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.
Then what about the information provided above? Is it meaningless? @dimzog @glenn-jocher

glenn-jocher commented 3 years ago

@dimzog @pravastacaraka PyTorch Hub provides a pathway for defining models, nothing more. What you do with that model is up to you, but you are responsible for building the functionality you want yourself.

As I said, for a fully managed training solution I would recommend train.py.
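
For reference, a typical train.py run from a cloned repo looks like the line below; the dataset yaml, image size and epoch count are up to you:

python train.py --img 640 --batch 16 --epochs 100 --data coco128.yaml --weights yolov5s.pt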

Dylan-H-Wang commented 3 years ago

When I was using the model loaded by torch.hub, it seems that print(), pandas() and these other nice functions only work when the input is a non-tensor. If the input is a tensor, the output will be a list. My questions are:

  1. What does this list mean, and how can I use it?
  2. Is there any way to use pandas() if the input data are tensors? Thank you!

glenn-jocher commented 3 years ago

@Dylan-H-Wang yes, the current intended behavior for torch inputs is simply for the AutoShape() wrapper to act as a pass-through. No preprocessing, postprocessing or NMS is done, and no results object is generated. This is the default use case in train.py, test.py, detect.py, and yolo.py. https://github.com/ultralytics/yolov5/blob/ffb47ffbebaef1d54d177bc339a108a7003357f8/models/common.py#L253-L255
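
To make the difference concrete, here is a small sketch: non-tensor inputs go through the full AutoShape pipeline and return a Detections object with print()/pandas(), while tensor inputs are passed straight to the underlying model and return raw predictions that you must post-process (e.g. NMS) yourself:

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# path / PIL / numpy input: letterboxing, inference, NMS, then a Detections object
results = model('https://ultralytics.com/images/zidane.jpg')
print(results.pandas().xyxy[0])

# tensor input: pass-through, raw predictions only (no Detections object)
x = torch.zeros(1, 3, 640, 640)
raw = model(x)  # apply your own post-processing, e.g. utils.general.non_max_suppression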

rullisubekti commented 3 years ago

hello @glenn-jocher, I have an issue: when I run detect.py and when I load the model using torch.hub.load, with the same sample data and weights file, I get different detection results and different xyxy values. Why? Thank you!

glenn-jocher commented 3 years ago

@rullisubekti these two topics are separate. detect.py is a fully managed inference solution that does not use the AutoShape() wrapper. YOLOv5 PyTorch Hub models are intended for your own custom python workflows and utilize the AutoShape() wrapper.

Laudarisd commented 3 years ago

Hello everyone, I am stuck here, can anyone give me hints? I tried to import a custom model and get the prediction boxes as given in the example. I did this so far; it detects how many objects of each class are in the images but doesn't show xmin, ymin, xmax and ymax.

import cv2
import torch
from PIL import Image
import glob

#model
path = "./"
#model = torch.load('./last.pt')
model = torch.hub.load('ultralytics/yolov5', 'custom', path='./best.pt')  # custom model
CUDA_VISIBLE_DEVICES = "0"

model.conf = 0.25  # confidence threshold (0-1)
model.iou = 0.45  # NMS IoU threshold (0-1)

dataset_name = 'test_1'
test_img_path = './' + dataset_name + '/*.png'

test_imgs = sorted(glob.glob(test_img_path))
print(len(test_imgs))

for img in test_imgs:
    #print(img)
    #file_name = img.split('/')[-1]
    image = cv2.imread(img)
    img1 = Image.open(img)
    #print(img)
    img2 = cv2.imread(img)[:, :, ::-1]
    imgs = [img2]
    #print(img2)
    results = model(imgs, size = 640)
    results.print()
    results.xyxy[0]
    results.pandas().xyxy[0]

this is the result

5
image 1/1: 1023x1920 4 yess
Speed: 15.6ms pre-process, 27.5ms inference, 1.3ms NMS per image at shape (1, 3, 352, 640)
image 1/1: 1023x1920 6 yess
.........

Any help would be appreciated.

Thanks a lot.

glenn-jocher commented 3 years ago

@Laudarisd in Python if you want to see the contents of a variable you might want to print its value.
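
For example, printing the pandas DataFrame inside the loop above would show the box coordinates (a sketch):

print(results.pandas().xyxy[0])  # prints xmin, ymin, xmax, ymax, confidence, class, name per detection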

dllu commented 3 years ago

The first simple example doesn't seem to work...

(env) zxcv > cat wtf.py
#!/usr/bin/env python3
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
img = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(img)

results.print()

(env) zxcv > ./wtf.py
Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /home/dllu/.cache/torch/hub/master.zip
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
Adding AutoShape...
YOLOv5 🚀 2021-5-25 torch 1.9.0.dev20210525+cu111 CUDA:0 (NVIDIA GeForce RTX 3090, 24234.625MB)

/home/dllu/zxcv/env/lib/python3.9/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1260.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
  File "/home/dllu/zxcv/./wtf.py", line 13, in <module>
    results.print()
  File "/home/dllu/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 344, in print
    self.display(pprint=True)  # print results
  File "/home/dllu/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 322, in display
    str += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, "  # add to string
IndexError: list index out of range

EDIT: I deleted the line from .cache/torch/hub/ultralytics_yolov5_master/models/common.py, line 322 and now it works. Seems like a bug though.

glenn-jocher commented 3 years ago

@dllu both examples work correctly, just checked:

Screenshot 2021-05-26 at 00 49 28

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

dllu commented 3 years ago

Hi @glenn-jocher, upon further debugging it seems to be a bug with Pytorch. Very strange --- I'll dig a bit further. https://github.com/pytorch/pytorch/issues/58959

Laudarisd commented 3 years ago

@dllu Actually I also encountered the same problem while doing inference in Docker. The strange thing is there is no problem when I run the detect code locally. My local PC runs Ubuntu 20.04. I guess this is an issue with the Python version, but I am not sure.

Laudarisd commented 3 years ago

Hi @glenn-jocher, regarding "in Python if you want to see the contents of a variable you might want to print its value": could you give me some hints on how to visualize the variables?

Thank you.

glenn-jocher commented 3 years ago

@Laudarisd

x=1
print(x)
lonnylundsten commented 3 years ago

Can we run inference on a video with YOLOv5 in PyTorch Hub? If so, can you show a brief example of that?

# open video
vid1 = cv2.VideoCapture('/path/to/video.mp4')

# inference
results = model(vid1, size=640)

glenn-jocher commented 3 years ago

@lonnylundsten YOLOv5 PyTorch Hub inference is meant for integration into your own python workflows.

For a fully managed inference solution you can use detect.py.
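
For reference, detect.py accepts video files (and streams) directly as the --source argument; a typical invocation from a cloned repo ('video.mp4' is a placeholder path):

python detect.py --weights yolov5s.pt --source video.mp4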

jmayank23 commented 3 years ago

I used the command given in the documentation to load a custom model: model = torch.hub.load('ultralytics/yolov5', 'custom', path='/content/yolov5/runs/train/yolov5s_results3/weights/best.pt')  # default

But got the following error: ImportError: cannot import name 'save_one_box' from 'utils.general' (/content/yolov5/utils/general.py)

Further, I checked whether that was the case and noticed that the function is there in general.py.

Please help

Screenshot 2021-06-05 at 1 21 59 AM
glenn-jocher commented 3 years ago

@jmayank23 👋 hi, thanks for letting us know about this problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

almog-gueta commented 3 years ago

Hello, I want to train the YOLOv5 model from scratch (not using the pretrained weights) on my own dataset and classes for a task of Face Mask Detection.

I have seen that in order to train I should load: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False) # load scratch

However, how do I actually train it? Can I use it as one layer in my model?

Thank you, Almog

glenn-jocher commented 3 years ago

@almog-gueta see Train Custom Data tutorial:

YOLOv5 Tutorials