glenn-jocher opened 4 years ago
How can I select which GPU to load my model on when using Hub? I find my model only loads on one card.
Change hubconf.py line 54 to this:
device = select_device('0' if torch.cuda.is_available() else 'cpu') if device is None else select_device(device)
This solved my problem.
@hoangnkust
Thank you sir, I changed the CUDA version. Now it's working normally.
To what CUDA version did you change? I'm having the same issue.
detect.py works well; it outputs an image with bounding boxes. But when using torch.hub, no bounding boxes are present on the exact same image.
I have downloaded torch with CUDA 11.1.
I have Nvidia driver 466.11, which includes CUDA 11.3.70.
Edit: It seems downgrading to torch with CUDA 10.2 fixed the issue. Not sure if this is a bug, as CUDA should be backwards compatible.
I tried
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
or model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
or model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
or model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, force_reload=True)
but none of them work, and I want to know why. I think it may be because I'm downloading from China, so the network caused it.
So I tried to download it from the URL through Chrome instead.
That raises another question: if I download the model this way, how can I load it and continue to inference?
Thanks for your attention; looking forward to your reply.
You should check your network, and you can try this:
model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # default
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo
Just see the tutorial.
Thanks for your answer. I tried your method, but it still doesn't work. The situation is here.
My network is OK, and I downloaded the yolov5s.pt like this.
My problem now is how to load this model for inference through torch.hub.load() or any other method.
I read some Chinese blogs about torch.hub.load(), and on my computer I found that
I guess maybe I can copy the yolov5s.pt to the directory or somewhere, and run the code "model = torch.hub.load('ultralytics/yolov5', 'yolov5s')". But I'm not sure what exactly I should do. If you know, please tell me in as much detail as possible. I would appreciate it a lot.
I think you should take a good look at the documents under this issue
If you are not willing to answer my question, please do not comment and criticize at will.
@achel-x instructions are indicated in this tutorial. Weights are available worldwide.
Thank you for your reply. I solved it with the following steps: first I downloaded yolov5s.pt from the URL, then I copied it to ~/.cache/torch/.../..._master (the directory), and finally I ran the code: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True). It works.
I just don't know why it doesn't work directly, because I also can't get the picture from the original code according to the tutorial, but instead had to read it locally in an indirect way. That means I can't load the model and the test picture via HTTPS; I had to load them locally. Plus, my network is OK, and when I download the model from the URL through a VPN, the speed is faster than without.
@glenn-jocher I am using this code but it is not working. I want to show the video at runtime with bounding boxes if an object is detected.

import cv2
import numpy as np
import torch
from PIL import Image

model = torch.hub.load('C:/Users/Hassam/.cache/torch/hub/ultralytics_yolov5_master', 'custom', force_reload=True, path='best.pt', source='local')

cap = cv2.VideoCapture(r'E:\HMBD1\hmdb51_org\smoke\smoke\American_History_X_smoke_u_nm_np1_fr_med_43.avi')

while cap.isOpened():
    ret, frame = cap.read()
    results = model(frame, size=640)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
results.save()
Yo @Hassamarshad, I have the same intention as well. Not sure whether my method is correct or not, but please see my code below. Somehow, it gave me the results.
import torch
import cv2

# Model
model = torch.hub.load('ultralytics/yolov5', 'custom', path='01_model/yolov5x.pt')
model = model.autoshape()

# Load the image/video. 0 for using webcam
input_image = cv2.VideoCapture(r'00_test_material\0802_F.mp4')

while input_image.isOpened():  # while the source is true/open
    ret, read_image = input_image.read()  # read the source

    # ----- Inference -----
    results = model(read_image)
    """ Sample output:
    '<models.common.Detections object at 0x000001CA40D79430>' """

    # ----- Results ----- print(); show(); save(); crop(); render(); pandas(); tolist()
    results.print()
    """
    Sample output:
    Speed: 2.0ms pre-process, 825.8ms inference, 4.0ms NMS per image at shape (1, 3, 384, 640)
    """
    print(results.pandas().xyxy[0])  # item_no; xmin; ymin; xmax; ymax; confidence; class; name

    results.render()  # updates/renders results into the source
    # print(f'prediction: {results.pred}')

    # ----- show the results -----
    cv2.namedWindow("result", cv2.WINDOW_NORMAL)
    cv2.imshow('result', read_image)
    cv2.waitKey(1)

    # Stop the program
    key = cv2.waitKey(1)
    if key == ord('w'):  # pause
        cv2.waitKey()
    elif key == ord('q'):  # stop
        break
How can I use the real-time game screen as input? This is my main code:

import torch
from PIL import Image
from Test import Windows

model = torch.hub.load(r'C:\Users\xxx\Desktop\yolov5-master', 'custom', path='best.pt', source='local')
result = model(Windows())
result.print()

This is my screenshot code:

import os
import time
import numpy as np
import cv2
import win32gui
from mss import mss

def Windows():
    os.system('calc')
    sct = mss()
    xx = 1
    tstart = time.time()
    while xx < 10000:
        hwnd = win32gui.FindWindow(None, 'calc')
        left_x, top_y, right_x, bottom_y = win32gui.GetWindowRect(hwnd)
        bbox = {'top': top_y, 'left': left_x, 'width': right_x - left_x, 'height': bottom_y - top_y}
        screen = sct.grab(bbox)
        scr = np.array(screen)
        cv2.imshow('window', scr)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break

I want to achieve real-time identification of the game screen and output coordinates, but I'm now stuck at the step of feeding the game screen to the model as input. Seeking answers, thank you.
@kkive to run inference on your desktop screenshot:
import torch
from PIL import ImageGrab
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# Image
img = ImageGrab.grab() # take the screenshot
# Inference
results = model(img)
I have the issue Exception: Cache may be out of date, try `force_reload=True` when using AWS SAM Lambda with a Docker container.
LOG
START RequestId: ..Version: $LATEST
Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /root/.cache/torch/hub/master.zip
Downloading https://ultralytics.com/assets/Arial.ttf to /root/.cache/torch/hub/ultralytics_yolov5_master/Arial.ttf...
100%|██████████| 755k/755k [00:00<00:00, 2.62MB/s]
File "/root/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 65, in _create
    raise Exception(s) from e
Exception: Cache may be out of date, try `force_reload=True`. See https://github.com/ultralytics/yolov5/issues/36 for help.
END RequestId: ...
{
"errorMessage":"Cache may be out of date, try `force_reload=True`. See https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.",
"errorType":"Exception",
"stackTrace":[
" File \"/var/task/app.py\", line 16, in lambda_handler\n model = torch.hub.load(\\'ultralytics/yolov5\\', \\'custom\\',\n",
" File \"/var/task/torch/hub.py\", line 339, in load\n model = _load_local(repo_or_dir, model, *args, **kwargs)\n",
" File \"/var/task/torch/hub.py\", line 368, in _load_local\n model = entry(*args, **kwargs)\n",
" File \"/root/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py\", line 70, in custom\n return _create(path, autoshape=autoshape, verbose=verbose, device=device)\n",
" File \"/root/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py\", line 65, in _create\n raise Exception(s) from e\n"
]
}
app.py:

def lambda_handler(event, context):
    image_bytes = event['body'].encode('utf-8')
    image = Image.open(BytesIO(base64.b64decode(image_bytes))).convert(mode='L')
    image.save('image.jpg')

    model = torch.hub.load('ultralytics/yolov5', 'custom',
                           path='yolov5_nfl_helmet_trained.pt',
                           force_reload=True).autoshape()
    model.conf = 0.4
    model.eval()
It works on the local machine, so I guess it's something related to the container. Does anyone have any ideas?
@francesco-taioli I'm not sure, but your usage is not consistent with the tutorial above, i.e. you have a .autoshape() call that is not shown in our tutorial. You might want to start with aligning your implementation with the tutorial.
@glenn-jocher thank you for the rapid response. Unfortunately, removing .autoshape() doesn't change anything; the error remains.
It would be really nice if the examples listed showed how to actually use the results you get from the inference. results.print() shows the right info (as text printed to stdout), but it's very non-obvious how to easily check whether you had any matches.
I see people keep saying "just use print" when people are asking really similar questions in this issue, which does nothing useful unless you are an experienced torch/yolov5 user.
After spending about 8 hours trying to do something as simple as getting the bounding box coordinates from the inference, this is what I came up with. I am sure this is not the intended way, but like others in this issue, I really hit a brick wall on how to use this.
# Numpy image
results = model(cv.cvtColor(np_image, cv.COLOR_BGR2RGB), size=800)

So first I tried, as suggested: print(results), which just returns <models.common.Detections object at 0x000001D80C6B6EB0>, so that is not super useful.
Next try, as shown in the example: print(results.xyxy[0]), which gave me:
[tensor([], device='cuda:0', size=(0, 6))]  # when not matching anything
[tensor([[1.34480e+03, 1.37360e+03, 1.38960e+03, 1.42960e+03, 2.51465e-01, 0.00000e+00]], device='cuda:0')]  # when I get a match
OK, that looks useful, but how do you get the coordinates of the bounding boxes from that tensor object/list? Luckily the thing I need it for is either there or not (no multiple results/detections), so this is what I came up with. It feels wrong even though it sort of works:
if len(results.xywh[0]) == 1:
    # We got one match?!?, who knows, seems to work while testing
    x = results.xywh[0][0][0]
    y = results.xywh[0][0][1]
    print(f"Found object at x: {x}, y: {y}")
At least now it seems to return the expected x, y coordinates for the matched object: Found object at x: 1325.5999755859375, y: 508.79998779296875.
Oh nice, you would assume this is a float, but no: print(type(x)) gives <class 'torch.Tensor'>, so not really much closer to getting the results. It's very unclear how you use the Tensor object, and looking at the source for yolov5 just makes you more confused. But at least it seems you can cast x as an int: x1 = int(x), which has the expected value.
It would be really nice if this were shown officially. The missing pieces from the example in this issue are, imho: 1) show how to detect whether there were any matches; 2) show how to get the coords as normal primitive types for each result/match.
Right now I still have no idea if this is the way you are supposed to use the results you get from the inference, but at least it's semi-working.
@mnj the tutorial above shows everything you need. The results object contains results expressed in a variety of formats, i.e.:
results.xyxy[0]  # img1 predictions (tensor)
EDIT: also, you can apply basic Python to any Tensor, i.e. len(), and to any Tensor element, i.e. float(), but this is beyond the scope of this tutorial as it's expected the user has a basic knowledge of PyTorch and Python.
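As a concrete illustration, here is a minimal sketch of pulling plain-Python coordinates out of the results object (assuming a loaded `model` and input `im`; `results.xyxy[0]` is the per-image detection tensor shown in the tutorial above):

```python
results = model(im)
det = results.xyxy[0]  # tensor of shape (n, 6): xmin, ymin, xmax, ymax, confidence, class
if len(det):  # any detections at all?
    for *box, conf, cls in det.tolist():  # .tolist() converts to plain Python floats
        xmin, ymin, xmax, ymax = box
        print(f'class {int(cls)} at ({xmin:.1f}, {ymin:.1f}) with confidence {conf:.2f}')
```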
Hi @glenn-jocher. I know that there is a tutorial about the simplified inference, but is there something about training the model from the PyTorch Hub? I see that it is possible to load the model for training, but once we've loaded it, how to train it?
Also, is it possible to export this model to onnx? Or is it still required to clone the repo to do it?
@augustoolucas for training see YOLOv5 Train Custom Data tutorial. For ONNX export see YOLOv5 TorchScript, ONNX, CoreML Export tutorial.
@glenn-jocher but these tutorials require that we clone the entire repository. My question was about training or exporting the model loaded from the PyTorch Hub. So I suppose that it is not possible, right?
@augustoolucas yes YOLOv5 uses the YOLOv5 repository. I'm not sure I'm understanding your question. You can always create your own scripts if you'd rather not use the YOLOv5 repo.
@glenn-jocher Ok, I'm sorry, I'll try to elaborate. There are two examples on how to do the inference. The first one is by using the model loaded from the PyTorch Hub, which doesn't require this repository. The second is by using the detect.py code, which requires the repo. But for training, there is just one example, the one using the train.py code and, therefore, requiring the cloned repo. So, my first question was whether there was some example showing how to train the model loaded from the PyTorch Hub.
@augustoolucas this is the YOLOv5 repository. The YOLOv5 repository uses train.py for training as shown in the YOLOv5 tutorials: https://github.com/ultralytics/yolov5/blob/master/train.py
Naturally all YOLOv5 files and functions are within the YOLOv5 repo, we don't have any other content other than what is inside the repository. You can find other general training tutorials elsewhere, i.e. https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
There is such a piece of code in the common.py file: https://github.com/ultralytics/yolov5/blob/808bcad3bb952f4976aca63f95af8855bc227090/models/common.py#L362 The t value is a tuple containing 3 elements, for example:
(10.732412338256836, 15.167951583862305, 0.8959770202636719)
I found the meaning of the 3 elements in this tuple: they are Pre-process, Inference, and Post-process.
https://github.com/ultralytics/yolov5/blob/808bcad3bb952f4976aca63f95af8855bc227090/models/common.py#L307-L343
@Zengyf-CVer yes that's correct, FPS = 1000.0 / self.t[1]
@glenn-jocher
Thank you very much for your reply
Please, can you fix this?
Error: Exception: Cache may be out of date, try `force_reload=True`. See https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.
@vh4 there's nothing to fix. FileNotFoundError indicates that you are pointing torch.hub.load() to an incorrect path. If you are running this command from inside yolov5/ then you should omit the initial yolov5/ from your path.
@glenn-jocher thank you very much
The result is different when
@JongWooBAE 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:
In addition to the above requirements, for Ultralytics to provide assistance your code should be:
git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits. If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template, providing a minimum reproducible example to help us better understand and diagnose your problem.
Thank you! 😃
No matter what I change, it always leads to a problem with my 'path'. What do I have to change to get it to work? I'm doing this on Windows 10, Python 3.8.5, conda 4.9.2 and torch 1.9.1.
I have a custom model as well, and this is how I set it up.
@rlalpha @justAyaan @MohamedAliRashad this PyTorch Hub tutorial is now updated to reflect the simplified inference improvements in PR #1153. It's very simple now to load any YOLOv5 model from PyTorch Hub and use it directly for inference on PIL, OpenCV, Numpy or PyTorch inputs, including for batched inference. Reshaping and NMS are handled automatically. Example script is shown in above tutorial.
Hi, and thanks for this great repo. The AutoShape forward method allows using numpy arrays as input for inference; however, numpy is not among the input formats for training the model. What is the best way to use them in the training phase as well? Do I need to customize the training code for numpy inputs? Or should I convert the training numpy arrays to jpg/png, train the custom model on images, and then use numpy arrays only for inference?
I found my ultralytics_yolov5_master, but it is in a .cache folder that is not in the same folder as my best.pt. Should I just cut and paste them together?
From my understanding of the code: the first parameter, 'repo_or_dir', can be defined as 'repo_owner/repo_name' from GitHub (online) or '/some/local/path' on your computer (local).
If you go with the online repo, the first run will clone the repo and store it in the cache, so on subsequent runs you don't have to download it again. With this method you don't need to define 'source'.
Otherwise, if you wish to read it from your local directory, you can clone this repo and save it on your computer, but then you need to define 'source' as 'local'. See the sketch below.
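A sketch of both forms (paths are placeholders):

```python
import torch

# Online: clones ultralytics/yolov5 into the hub cache on the first run
model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')

# Local: point at an existing clone and set source='local'
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')
```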
I have loaded my custom model trained for a maximum of 50 epochs, but it's not detecting the image.
@sahil-dhuri see Tips for Best Training Results tutorial below for steps to improve your results.
@sahil-dhuri hiding your comment as unreadable. Please raise a new issue if you are having individual issues.
@glenn-jocher I ran the following code and found that the output was not what I wanted:
import torch
import numpy as np
from PIL import Image

imgPath = './imgs/Millenial-at-work.jpg'
img = Image.open(imgPath).resize((640, 640), Image.ANTIALIAS)
# if (img.mode != 'RGB'):
#     img = img.convert("RGB")
img = torch.tensor(np.array(img)).permute((2, 0, 1)).unsqueeze(0)
img = img.float() / 255

model1 = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
with torch.no_grad():
    output = model1(img)
output0 = output[0]
print(output0.shape)
print(output0)
This seems to be inconsistent with your results. What is the reason? This is your result:
Hi, thanks for the great work and for releasing the yolov5n model. However, I was trying to check the memory consumption of loading the YOLOv5 weights. Loading the weights on GPU, the yolov5s model takes around 2.6 GB, while on CPU it took only 72 MB. Can anyone explain the logic behind this, and how can I reduce the memory consumption of the attempt_load function? Thank you.
@Zengyf-CVer none of your extra code is required, just pass an image path. I can't make it any easier.
results = model('image.jpg')
@glenn-jocher You may not understand what I mean. I used almost the same code as you, but the results are different. Your result output is the prediction tensor after NMS processing, while my result is the prediction tensor before NMS processing. I used the latest code, and your result is from v3.0. I guess you may have modified the earlier code, causing the output difference. You can give it a try. This is my program:
import torch
import numpy as np
from PIL import Image

imgPath = './imgs/Millenial-at-work.jpg'
img = Image.open(imgPath).resize((640, 640), Image.ANTIALIAS)
# if (img.mode != 'RGB'):
#     img = img.convert("RGB")
img = torch.tensor(np.array(img)).permute((2, 0, 1)).unsqueeze(0)
img = img.float() / 255

model1 = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
with torch.no_grad():
    output = model1(img)
output0 = output[0]
print(output0.shape)
print(output0)
This is my output:
This is your result:
@Zengyf-CVer we don't assist in debugging custom code. Follow YOLOv5 PyTorch Hub tutorial for correct usage:
@glenn-jocher Alright, thank you very much.
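A note on the discrepancy discussed above (an explanation sketch, not an official statement): AutoShape models pass raw torch.Tensor inputs straight through to the underlying model, skipping pre-processing and NMS, while path/PIL/numpy inputs get the full pipeline:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Path / PIL / numpy input: letterboxing, inference and NMS are all applied
results = model('image.jpg')  # placeholder filename
print(results.xyxy[0])  # post-NMS detections, shape (n, 6)

# Raw tensor input: bypasses AutoShape and returns raw, pre-NMS predictions
x = torch.zeros(1, 3, 640, 640)
raw = model(x)
```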
Hi @glenn-jocher, is there a way to hide the confidence score for a custom model load?
model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='best.pt')
model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
Here I am looking for something similar to hide the confidence score. I have used model.hide_conf = True, but it did not work. Can you please help me out here? Also, if I want to customize some other parameter, does YOLOv5 have a list of them?
@animeshkalita82 hi, thank you for your feature suggestion on how to improve YOLOv5 🚀! All settable model parameters are displayed in above tutorial:
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for persons, cats and dogs
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
results = model(imgs, size=320) # custom inference size
The fastest and easiest way to incorporate your ideas into the official codebase is to submit a Pull Request (PR) implementing your idea, and if applicable providing before and after profiling/inference/training results to help us understand the improvement your feature provides. This allows us to directly see the changes in the code and to understand how they affect workflows and performance.
Please see our ✅ Contributing Guide to get started.
I just found out about the new v6.0 release and the nano models, and figured I could just load them from the hub as well; however, it seems that neither yolov5n nor yolov5n6 nor any of the v6.0 variants are in the hub yet (I'm getting RuntimeError: Cannot find callable yolov5n in hubconf).
Any ETA on when they will be added? If not, can I simply load yolov5n.pt with a similar call while keeping the rest of the code the same (i.e. simply calling model(img) to get results)?
Thanks!
EDIT: Deleting the cache directory (/home/$USER/.cache/torch/hub/ultralytics_yolov5_master) and calling torch.hub.load again makes it work :)
📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub https://pytorch.org/hub/ultralytics_yolov5. See YOLOv5 Docs for additional details. UPDATED 26 March 2023.
Before You Start
Install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.
💡 ProTip: Cloning https://github.com/ultralytics/yolov5 is not required 😃
Load YOLOv5 with PyTorch Hub
Simple Example
This example loads a pretrained YOLOv5s model from PyTorch Hub as `model` and passes an image for inference. `'yolov5s'` is the lightest and fastest YOLOv5 model. For details on all available models please see the README.
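A minimal sketch (the image URL is illustrative):

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(im)

results.pandas().xyxy[0]  # predictions as a pandas DataFrame
```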
Detailed Example

This example shows batched inference with PIL and OpenCV image sources. `results` can be printed to console, saved to `runs/hub`, shown on screen in supported environments, and returned as tensors or pandas DataFrames. For all inference options see the YOLOv5 `AutoShape()` forward method: https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252
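A sketch along these lines (filenames are illustrative):

```python
import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
im1 = Image.open('zidane.jpg')          # PIL image
im2 = cv2.imread('bus.jpg')[..., ::-1]  # OpenCV image (BGR to RGB)

# Batched inference
results = model([im1, im2], size=640)

# Results
results.print()
results.save()  # or .show()

results.xyxy[0]           # img1 predictions (tensor)
results.pandas().xyxy[0]  # img1 predictions (pandas)
```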
Inference Settings

YOLOv5 models contain various inference attributes such as confidence threshold, IoU threshold, etc., which can be set by:
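For example (the same attributes listed in the comments above; `agnostic` and `amp` are additional AutoShape attributes, included here as a best-effort sketch):

```python
model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for persons, cats and dogs
model.max_det = 1000  # maximum number of detections per image
model.amp = False  # Automatic Mixed Precision (AMP) inference

results = model(im, size=320)  # custom inference size
```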
Device
Models can be transferred to any device after creation:
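For example:

```python
model.cpu()  # CPU
model.cuda()  # GPU
model.to(device)  # i.e. device=torch.device(0)
```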
Models can also be created directly on any `device`, as sketched below.

💡 ProTip: Input images are automatically transferred to the correct model device before inference.
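A sketch of creating models directly on a device:

```python
model_cpu = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')  # load on CPU
model_gpu = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=0)  # load on GPU 0
```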
Silence Outputs
Models can be loaded silently with `_verbose=False`:
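For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False)  # load silently
```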
Input Channels

To load a pretrained YOLOv5s model with 4 input channels rather than the default 3:
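For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', channels=4)
```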
In this case the model will be composed of pretrained weights except for the very first input layer, which is no longer the same shape as the pretrained input layer. The input layer will remain initialized by random weights.
Number of Classes
To load a pretrained YOLOv5s model with 10 output classes rather than the default 80:
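For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)
```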
In this case the model will be composed of pretrained weights except for the output layers, which are no longer the same shape as the pretrained output layers. The output layers will remain initialized by random weights.
Force Reload
If you run into problems with the above steps, setting `force_reload=True` may help by discarding the existing cache and forcing a fresh download of the latest YOLOv5 version from PyTorch Hub.
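For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)  # force reload
```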
Screenshot Inference

To run inference on your desktop screen:
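A sketch using PIL's ImageGrab (the same approach shown in the comments above; ImageGrab platform support varies):

```python
import torch
from PIL import ImageGrab

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = ImageGrab.grab()  # take a screenshot

# Inference
results = model(im)
```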
Multi-GPU Inference
YOLOv5 models can be loaded to multiple GPUs in parallel with threaded inference:
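A sketch (image URLs illustrative; assumes at least two CUDA devices):

```python
import threading

import torch

def run(model, im):
    # Run inference on one model and save annotated results
    results = model(im)
    results.save()

# One model copy per GPU
model0 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=0)
model1 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=1)

# Threaded inference
threading.Thread(target=run, args=[model0, 'https://ultralytics.com/images/zidane.jpg']).start()
threading.Thread(target=run, args=[model1, 'https://ultralytics.com/images/bus.jpg']).start()
```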
Training
To load a YOLOv5 model for training rather than inference, set `autoshape=False`. To load a model with randomly initialized weights (to train from scratch) use `pretrained=False`. You must provide your own training script in this case. Alternatively see our YOLOv5 Train Custom Data Tutorial for model training.
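For example:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained, no AutoShape
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load from scratch
```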
Base64 Results

For use with API services. See https://github.com/ultralytics/yolov5/pull/2291 and the Flask REST API example for details.
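A sketch of one way to base64-encode rendered results (assuming a loaded `model` and input `im`; the attribute holding the rendered arrays is `results.ims` in recent versions, `results.imgs` in older ones):

```python
import base64
from io import BytesIO

from PIL import Image

results = model(im)  # inference
results.render()  # draw boxes and labels into results.ims

for im_arr in results.ims:
    buffered = BytesIO()
    Image.fromarray(im_arr).save(buffered, format='JPEG')
    print(base64.b64encode(buffered.getvalue()).decode('utf-8'))  # base64-encoded image with results
```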
Cropped Results
Results can be returned and saved as detection crops:
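For example:

```python
results = model(im)  # inference
crops = results.crop(save=True)  # cropped detections dictionary
```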
Pandas Results
Results can be returned as Pandas DataFrames:
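For example:

```python
results = model(im)  # inference
results.pandas().xyxy[0]  # img1 predictions as a pandas DataFrame
```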
Pandas Output (click to expand)

```python
print(results.pandas().xyxy[0])
#       xmin    ymin    xmax   ymax  confidence  class    name
# 0   749.50   43.50  1148.0  704.5    0.874023      0  person
# 1   433.50  433.50   517.5  714.5    0.687988     27     tie
# 2   114.75  195.75  1095.0  708.0    0.624512      0  person
# 3   986.00  304.00  1028.0  420.0    0.286865     27     tie
```
Sorted Results

Results can be sorted by column, i.e. to sort license plate digit detections left-to-right (x-axis):
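For example:

```python
results = model(im)  # inference
results.pandas().xyxy[0].sort_values('xmin')  # sorted left-right
```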
JSON Results
Results can be returned in JSON format once converted to `.pandas()` dataframes using the `.to_json()` method. The JSON format can be modified using the `orient` argument. See the pandas `.to_json()` documentation for details.
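For example:

```python
results = model(im)  # inference
results.pandas().xyxy[0].to_json(orient='records')  # JSON img1 predictions
```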
JSON Output (click to expand)

```json
[
  {"xmin":749.5,"ymin":43.5,"xmax":1148.0,"ymax":704.5,"confidence":0.8740234375,"class":0,"name":"person"},
  {"xmin":433.5,"ymin":433.5,"xmax":517.5,"ymax":714.5,"confidence":0.6879882812,"class":27,"name":"tie"},
  {"xmin":115.25,"ymin":195.75,"xmax":1096.0,"ymax":708.0,"confidence":0.6254882812,"class":0,"name":"person"},
  {"xmin":986.0,"ymin":304.0,"xmax":1028.0,"ymax":420.0,"confidence":0.2873535156,"class":27,"name":"tie"}
]
```
Custom Models

This example loads a custom 20-class VOC-trained YOLOv5s model `'best.pt'` with PyTorch Hub.
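A sketch (paths are placeholders):

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # local custom model
# or, from a local repo clone:
# model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')
```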
TensorRT, ONNX and OpenVINO Models

PyTorch Hub supports inference on most YOLOv5 export formats, including custom trained models. See the TFLite, ONNX, CoreML, TensorRT Export tutorial for details on exporting models.
💡 ProTip: TensorRT may be up to 2-5X faster than PyTorch on GPU benchmarks 💡 ProTip: ONNX and OpenVINO may be up to 2-3X faster than PyTorch on CPU benchmarks
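A sketch of loading exported models through the same 'custom' entry point (filenames follow the export script's default naming; treat the exact list as a best-effort sketch):

```python
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.pt')  # PyTorch
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.torchscript')  # TorchScript
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.onnx')  # ONNX
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s_openvino_model/')  # OpenVINO
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.engine')  # TensorRT
```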
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on MacOS, Windows, and Ubuntu every 24 hours and on every commit.