Open glenn-jocher opened 4 years ago
@glenn-jocher when I use the following line of code for yolov3-tiny
model = torch.hub.load('ultralytics/yolov3', 'yolov3-tiny', force_reload=True).autoshape()
it gives me the following error..
Downloading: "https://github.com/ultralytics/yolov3/archive/master.zip" to C:\Users\Asim/.cache\torch\hub\master.zip
Traceback (most recent call last):
File "E:/Face Mask 3Class Yolov5/new_hub.py", line 7, in
PyTorch Hub model names do not support dashes, you need to use an underscore:
@glenn-jocher when I use the following line of code :
model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', pretrained=True, force_reload=True).autoshape()
It gives me error :
Traceback (most recent call last):
File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\hubconf.py", line 37, in create
attempt_download(fname) # download if not found locally
File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\utils\google_utils.py", line 30, in attempt_download
tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:/Users/Asim/Desktop/Free Lance/New folder/camera-live-streaming/app.py", line 9, in
But if I use this line of code:
model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', force_reload=True).autoshape()
It doesn't give any error but also does not detect anything.
@asim266 this is the YOLOv5 PyTorch Hub tutorial. For questions about other repositories I would recommend you raise an issue there.
@glenn-jocher First of all, thank you for the fantastic community page and the help from you guys.
One question:
How do I save the model locally and load it from a local file with torch.hub for YOLOv5? This is for the case where there is no internet access.
@bipinkc19 PyTorch Hub commands only need internet access the first time they are run, to download a cached copy of this repo. After this first time the cache is saved to disk and located for use in subsequent calls.
Whoever struggles with the nms error with CUDA backend:
RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend.
Update your torchvision to 0.8.1, that should resolve it. 🤟🏼
Is there a way to specify specific classes, like ["person", "cat"], to only identify person and cat?
@Xcalizorz hi good question! I've updated the tutorial above with details on how to filter inference results by class:
Inference settings such as confidence threshold, NMS IoU threshold, and classes filter are model attributes, and can be modified by:
model.conf = 0.25 # confidence threshold (0-1)
model.iou = 0.45 # NMS IoU threshold (0-1)
model.classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for persons, cats and dogs
results = model(imgs, size=320) # custom inference size
Could you please provide some more details on the Training section.
How does one properly pass the bounding box data and labels here when using a dataloader? Would very much appreciate an example with some skeleton code.
@Lauler see Train Custom Data tutorial to get started with training:
@glenn-jocher Thanks. I had already read Train Custom Data. I was under the impression that loading a model from torch.hub might allow more flexibility in letting the user specify their own Dataset, similar to the PennFudanDataset in this tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html , since you don't expect users to clone the yolov5 repo. But I should still organize the data according to the Train Custom Data guide?
I think ultimately, at some point in the future, it would be easier if users of object detection libraries could organize their data however they want and create their own train/validation dataloaders (similar to image classification tasks), as opposed to being forced to shuffle image files into folders with specific format requirements.
This is just a general remark (don't take it as negative criticism) about the API design of object detection libraries versus what has become the standard in image classification. Object detection libraries are not very flexible in comparison, and are hard to adapt to your own needs or your own validation schemes (cross-validation).
I will use the official way as described!
@Lauler you can use Hub models for any purpose including training. Hub models provide nothing else except a model, you must build your own training/inference infrastructure for whatever custom purposes you have.
Fully managed solutions for training, testing, and inference are also available in train.py, test.py, detect.py.
Why is the problem below still unsolved even though I set the parameter force_reload=True and tried many times?
@rerester hi sorry to see you are having problems. It's hard to determine what your issue may be from the small screenshot you have pasted. If you believe you have a reproducible bug, raise a new issue using the 🐛 Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and diagnose your problem. Thank you!
@glenn-jocher I get the error "RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU." at the model = torch.hub.load() line when I run my custom model (trained in a GPU environment) in a CPU environment. How can I use a GPU-trained custom model (trained with the YOLOv5 Git version, not PyTorch Hub) in a CPU environment with PyTorch Hub? (Of course, the custom model works well in a GPU environment with PyTorch Hub.)
@suyong2 backend assignment is handled automatically in YOLOv5 PyTorch Hub models, so if you have a GPU your model will load there, if not it will load on CPU.
If this does not answer your question and you believe you have a reproducible issue, we suggest you raise a new issue using the 🐛 Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and diagnose your problem. Thank you!
@glenn-jocher, is PyTorch hub support video inference?
@pravastacaraka PyTorch Hub can support any inference as long as you build a dataloader for it.
For a fully managed inference solution see detect.py.
@glenn-jocher Thanks for the tutorial. I don't know why, but the last 3 lines of code don't work.
@xinxin342 the last 3 lines work correctly.
In Python scripts, outputs are not displayed automatically. If you want to print outputs you can use the print() function.
@glenn-jocher
Thank you for solving my question so quickly.
Hello @glenn-jocher, can I load YOLOv5 from my local directory with torch.hub.load("mydir/yolov5/", "yolov5s")? I tried it but got the error "too many values to unpack (expected 2)".
@rullisubekti your code demonstrates incorrect usage. For correct usage read the tutorial above.
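For what it's worth, a hedged sketch of local loading (source='local' is a standard torch.hub.load argument; the path is illustrative and must contain a clone of the yolov5 repo):

```python
import torch

# load from a local clone of the repo instead of GitHub;
# pretrained weights must also be available locally for fully offline use
model = torch.hub.load('mydir/yolov5', 'yolov5s', source='local')
```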
Is there an easy way to make inference on my own model? When I try to follow the same steps, I run into all sorts of problems. Using a custom .pt file doesn't work out of the box. I have been trying to use the autoshape wrapper provided in common.py, but I get the following error (by the way, there is a bug in the autoshape class, self.stride is not defined):
RuntimeError: Sizes of tensors must match except in dimension 1. Got 18 and 17 in dimension 2 (The offending index is 1)
@PascalHbr loading custom YOLOv5 models in PyTorch Hub is very easy, see the 'Custom Models' section in the above tutorial.
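For reference, a minimal sketch of that usage (the weights path is illustrative):

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # custom trained weights
```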
@glenn-jocher how can I run training with PyTorch Hub? From your instructions:
Training
To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch) use pretrained=False.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False) # load scratch
Then what should I do?
@pravastacaraka to train YOLOv5 models see Train Custom Data tutorial:
@glenn-jocher That answer didn't help me. I will clarify my question.
Can I train my dataset using PyTorch Hub instead of using train.py? Because based on the information you provided above, there is a Training section:
Training
To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False) # load scratch
I thought I could use PyTorch Hub to train on my dataset. If so, how do I pass my dataset to these model variables? Is it like this?
results = model('path/to/my-dataset')
I think one should implement their own trainer(); correct me if I'm wrong @glenn-jocher.
So the conclusion is that we can't do training using the PyTorch hub, right?
Training
To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False) # load scratch
Then what about the information provided above? Is that meaningless? @dimzog @glenn-jocher
@dimzog @pravastacaraka PyTorch Hub provides a pathway for defining models, nothing more. What you do with that model is up to you, though you are required to create the functionality you want.
As I said, for a fully managed training solution I would recommend train.py.
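If you do want to build your own training loop around a Hub model, a rough sketch could look like the following; train_loader and compute_loss are placeholders you must supply yourself (e.g. a loss such as utils.loss.ComputeLoss from the yolov5 repo), and details vary between versions:

```python
import torch

# trainable model: no AutoShape wrapper, pretrained COCO weights
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)
model.train()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.937)

for epoch in range(3):
    for imgs, targets in train_loader:       # your own dataloader: imgs (B, 3, 640, 640) floats in 0-1,
        preds = model(imgs)                  # targets (N, 6) = [image_index, class, x, y, w, h] normalized
        loss = compute_loss(preds, targets)  # your own loss function
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```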
When I was using the model loaded by torch.hub, it seems like print(), pandas() and these nice functions only work when the input is non-tensor. If the input is a tensor, the output will be a list. My question is: is there a way to use print() and pandas() if the input data are tensors? Thank you!
@Dylan-H-Wang yes, the current intended behavior for torch inputs is simply for the AutoShape() wrapper to act as a pass-through. No preprocessing, postprocessing or NMS is done, and no results object is generated. This is the default use case in train.py, test.py, detect.py, and yolo.py.
https://github.com/ultralytics/yolov5/blob/ffb47ffbebaef1d54d177bc339a108a7003357f8/models/common.py#L253-L255
Hello @glenn-jocher, I have an issue: when I run detect.py and when I load the model using torch.hub.load, with the same sample data and weight file, I get different detection results and different xyxy values returned. Why? Thank you!
@rullisubekti these two topics are separate. detect.py is a fully managed inference solution that does not use the AutoShape() wrapper. YOLOv5 PyTorch Hub models are intended for your own custom python workflows and utilize the AutoShape() wrapper.
Hello everyone, I am stuck here, can anyone give me hints? I tried to import a custom model and get the prediction boxes as given in the example. I did this so far; it detects how many objects are in each image but doesn't show xmin, ymin, xmax and ymax.
import cv2
import torch
from PIL import Image
import glob
#model
path = "./"
#model = torch.load('./last.pt')
model = torch.hub.load('ultralytics/yolov5', 'custom', path='./best.pt') # custom model
CUDA_VISIBLE_DEVICES = "0"
model.conf = 0.25 # confidence threshold (0-1)
model.iou = 0.45 # NMS IoU threshold (0-1)
dataset_name = 'test_1'
test_img_path = './' + dataset_name + '/*.png'
test_imgs = sorted(glob.glob(test_img_path))
print(len(test_imgs))
for img in test_imgs:
    # print(img)
    # file_name = img.split('/')[-1]
    image = cv2.imread(img)
    img1 = Image.open(img)
    # print(img)
    img2 = cv2.imread(img)[:, :, ::-1]
    imgs = [img2]
    # print(img2)
    results = model(imgs, size=640)
    results.print()
    results.xyxy[0]
    results.pandas().xyxy[0]
this is the result
5
image 1/1: 1023x1920 4 yess
Speed: 15.6ms pre-process, 27.5ms inference, 1.3ms NMS per image at shape (1, 3, 352, 640)
image 1/1: 1023x1920 6 yess
.........
Any help would be appreciated.
Thanks a lot.
@Laudarisd in Python if you want to see the contents of a variable you might want to print its value.
The first simple example doesn't seem to work...
(env) zxcv > cat wtf.py
#!/usr/bin/env python3
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# Image
img = 'https://ultralytics.com/images/zidane.jpg'
# Inference
results = model(img)
results.print()
(env) zxcv > ./wtf.py
Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /home/dllu/.cache/torch/hub/master.zip
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
Adding AutoShape...
YOLOv5 🚀 2021-5-25 torch 1.9.0.dev20210525+cu111 CUDA:0 (NVIDIA GeForce RTX 3090, 24234.625MB)
/home/dllu/zxcv/env/lib/python3.9/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1260.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
File "/home/dllu/zxcv/./wtf.py", line 13, in <module>
results.print()
File "/home/dllu/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 344, in print
self.display(pprint=True) # print results
File "/home/dllu/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 322, in display
str += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
IndexError: list index out of range
EDIT: I deleted line 322 from .cache/torch/hub/ultralytics_yolov5_master/models/common.py and now it works. Seems like a bug though.
@dllu both examples work correctly, just checked:
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:
In addition to the above requirements, for Ultralytics to provide assistance your code should be current: git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits.
If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.
Thank you! 😃
Hi @glenn-jocher, upon further debugging it seems to be a bug with Pytorch. Very strange --- I'll dig a bit further. https://github.com/pytorch/pytorch/issues/58959
@dllu Actually I also encountered the same problem while doing inference in Docker. The strange thing is there is no problem when I run the detect code locally. My local PC has Ubuntu 20.04. I guess this is an issue with the Python version, but I am not sure.
Hi @glenn-jocher, regarding "in Python if you want to see the contents of a variable you might want to print its value": could you give me some hints on how to visualize variables?
Thank you.
@Laudarisd
x=1
print(x)
Can we run inference on a video with YOLOv5 in PyTorch Hub? If so, can you show a brief example of that.
vid1 = cv2.VideoCapture('/path/to/video.mp4')
results = model(vid1, size=640)
@lonnylundsten YOLOv5 PyTorch Hub inference is meant for integration into your own python workflows.
For a fully managed inference solution you can use detect.py.
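For anyone wanting the custom-workflow route, a minimal frame-by-frame sketch with OpenCV (the video path is illustrative; detect.py remains the managed option):

```python
import cv2
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

cap = cv2.VideoCapture('/path/to/video.mp4')
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[:, :, ::-1], size=640)  # BGR -> RGB before inference
    print(results.pandas().xyxy[0])               # detections for this frame
cap.release()
```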
I used the command given in the documentation to load a custom model:
model = torch.hub.load('ultralytics/yolov5', 'custom', path='/content/yolov5/runs/train/yolov5s_results3/weights/best.pt') # default
But got the following error:
ImportError: cannot import name 'save_one_box' from 'utils.general' (/content/yolov5/utils/general.py)
Further, I checked and noticed that the function is there in general.py.
Please help.
@jmayank23 👋 hi, thanks for letting us know about this problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:
In addition to the above requirements, for Ultralytics to provide assistance your code should be current: git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits.
If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.
Thank you! 😃
Hello, I want to train the YOLOv5 model from scratch (not using the pretrained weights) on my own dataset and classes for a task of Face Mask Detection.
I have seen that in order to train I should load: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False) # load scratch
However, how do I actually train it? Can I use it as one layer in my model?
Thank you, Almog
@almog-gueta see Train Custom Data tutorial:
📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub https://pytorch.org/hub/ultralytics_yolov5. See YOLOv5 Docs for additional details. UPDATED 26 March 2023.
Before You Start
Install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.
💡 ProTip: Cloning https://github.com/ultralytics/yolov5 is not required 😃
Load YOLOv5 with PyTorch Hub
Simple Example
This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the lightest and fastest YOLOv5 model. For details on all available models please see the README.
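For reference, a minimal version of the simple example (the same snippet appears in dllu's comment above):

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
img = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(img)
results.print()
```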
Detailed Example
This example shows batched inference with PIL and OpenCV image sources. results can be printed to console, saved to runs/hub, shown on screen in supported environments, and returned as tensors or pandas dataframes. For all inference options see the YOLOv5 AutoShape() forward method: https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252
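A sketch of batched inference along these lines ('zidane.jpg' and 'bus.jpg' are assumed to be local files):

```python
import cv2
import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Batched inputs: a URL/filename, a PIL image, and an OpenCV array (BGR converted to RGB)
imgs = [
    'https://ultralytics.com/images/zidane.jpg',
    Image.open('bus.jpg'),
    cv2.imread('zidane.jpg')[:, :, ::-1],
]

results = model(imgs, size=640)  # batched inference
results.print()
results.save()                   # save annotated images
print(results.pandas().xyxy[0])  # detections for the first image as a DataFrame
```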
Inference Settings
YOLOv5 models contain various inference attributes such as confidence threshold, IoU threshold, etc., which can be set by:
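For example (reusing model and img from the Simple Example above; conf, iou and classes are the attributes shown earlier in this thread):

```python
model.conf = 0.25     # confidence threshold (0-1)
model.iou = 0.45      # NMS IoU threshold (0-1)
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for persons, cats and dogs

results = model(img, size=320)  # custom inference size
```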
Device
Models can be transferred to any device after creation:
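For example, with model loaded as above (these are standard torch.nn.Module methods):

```python
model.cpu()                       # move model to CPU
model.cuda()                      # move model to GPU
model.to(torch.device('cuda:0'))  # move model to a specific device
```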
Models can also be created directly on any device:
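A sketch, assuming a yolov5 hubconf version that accepts a device keyword:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')  # load on CPU
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=0)      # load on GPU 0
```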
💡 ProTip: Input images are automatically transferred to the correct model device before inference.
Silence Outputs
Models can be loaded silently with _verbose=False:
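For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False)  # load silently
```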
Input Channels
To load a pretrained YOLOv5s model with 4 input channels rather than the default 3:
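A sketch, assuming a hubconf version that accepts a channels keyword:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', channels=4)  # 4 input channels
```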
In this case the model will be composed of pretrained weights except for the very first input layer, which is no longer the same shape as the pretrained input layer. The input layer will remain initialized by random weights.
Number of Classes
To load a pretrained YOLOv5s model with 10 output classes rather than the default 80:
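A sketch, assuming a hubconf version that accepts a classes keyword:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)  # 10 output classes
```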
In this case the model will be composed of pretrained weights except for the output layers, which are no longer the same shape as the pretrained output layers. The output layers will remain initialized by random weights.
Force Reload
If you run into problems with the above steps, setting force_reload=True may help by discarding the existing cache and forcing a fresh download of the latest YOLOv5 version from PyTorch Hub.
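For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)  # force a fresh download
```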
Screenshot Inference
To run inference on your desktop screen:
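One possible sketch (not necessarily the exact approach the original tutorial used) via PIL's ImageGrab, which requires a desktop session:

```python
import torch
from PIL import ImageGrab

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

img = ImageGrab.grab()  # take a screenshot of the desktop
results = model(img)
results.print()
```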
Multi-GPU Inference
YOLOv5 models can be loaded to multiple GPUs in parallel with threaded inference:
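A sketch with one model per GPU and one thread per model (device indices 0 and 1 are assumed; the device keyword depends on the hubconf version):

```python
import threading

import torch

def run(model, im):
    results = model(im)
    results.save()

model0 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=0)  # model on GPU 0
model1 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=1)  # model on GPU 1

t0 = threading.Thread(target=run, args=(model0, 'https://ultralytics.com/images/zidane.jpg'))
t1 = threading.Thread(target=run, args=(model1, 'https://ultralytics.com/images/bus.jpg'))
t0.start(); t1.start()
t0.join(); t1.join()
```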
Training
To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch) use pretrained=False. You must provide your own training script in this case. Alternatively see our YOLOv5 Train Custom Data Tutorial for model training.
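For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)                    # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch
```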
Base64 Results
For use with API services. See https://github.com/ultralytics/yolov5/pull/2291 and Flask REST API example for details.
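A sketch of encoding rendered results as base64 (render() draws boxes on the images and returns them; exact attribute names and behavior vary between versions):

```python
import base64
from io import BytesIO

from PIL import Image

results = model(img)  # model and img as in the Simple Example above

# render() draws boxes/labels and returns the annotated images as numpy arrays
for im in results.render():
    buffered = BytesIO()
    Image.fromarray(im).save(buffered, format='JPEG')
    print(base64.b64encode(buffered.getvalue()).decode('utf-8'))  # base64-encoded JPEG
```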
Cropped Results
Results can be returned and saved as detection crops:
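A sketch using the Detections.crop() helper (available in recent versions; the save directory is chosen by the library):

```python
results = model(img)
crops = results.crop(save=True)  # list of dicts with box, confidence, class and cropped image
```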
Pandas Results
Results can be returned as Pandas DataFrames:
Pandas Output
```python
print(results.pandas().xyxy[0])
#       xmin    ymin    xmax   ymax  confidence  class    name
# 0   749.50   43.50  1148.0  704.5    0.874023      0  person
# 1   433.50  433.50   517.5  714.5    0.687988     27     tie
# 2   114.75  195.75  1095.0  708.0    0.624512      0  person
# 3   986.00  304.00  1028.0  420.0    0.286865     27     tie
```
Sorted Results
Results can be sorted by column, i.e. to sort license plate digit detection left-to-right (x-axis):
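For example, using pandas sort_values on the returned DataFrame:

```python
results = model(img)
df = results.pandas().xyxy[0].sort_values('xmin')  # detections sorted left-to-right
```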
JSON Results
JSON Results
Results can be returned in JSON format once converted to .pandas() dataframes using the .to_json() method. The JSON format can be modified using the orient argument. See the pandas .to_json() documentation for details.
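For example:

```python
results.pandas().xyxy[0].to_json(orient='records')  # JSON string, one record per detection
```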
JSON Output
```json
[
  {"xmin":749.5,"ymin":43.5,"xmax":1148.0,"ymax":704.5,"confidence":0.8740234375,"class":0,"name":"person"},
  {"xmin":433.5,"ymin":433.5,"xmax":517.5,"ymax":714.5,"confidence":0.6879882812,"class":27,"name":"tie"},
  {"xmin":115.25,"ymin":195.75,"xmax":1096.0,"ymax":708.0,"confidence":0.6254882812,"class":0,"name":"person"},
  {"xmin":986.0,"ymin":304.0,"xmax":1028.0,"ymax":420.0,"confidence":0.2873535156,"class":27,"name":"tie"}
]
```
Custom Models
This example loads a custom 20-class VOC-trained YOLOv5s model 'best.pt' with PyTorch Hub.
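For example (the weights path is illustrative):

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # local custom weights
```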
TensorRT, ONNX and OpenVINO Models
PyTorch Hub supports inference on most YOLOv5 export formats, including custom trained models. See TFLite, ONNX, CoreML, TensorRT Export tutorial for details on exporting models.
💡 ProTip: TensorRT may be up to 2-5X faster than PyTorch on GPU benchmarks 💡 ProTip: ONNX and OpenVINO may be up to 2-3X faster than PyTorch on CPU benchmarks
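A sketch of loading exported models through the same 'custom' entry point (filenames are illustrative and must be produced by export.py first):

```python
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.onnx')             # ONNX
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s_openvino_model/')  # OpenVINO
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.engine')           # TensorRT
```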
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on MacOS, Windows, and Ubuntu every 24 hours and on every commit.