ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

loading yolov5 from local directory on spyder IDE #1640

Closed: docterstrang closed this issue 3 years ago

docterstrang commented 3 years ago

❔Question

I’m building a Django web application that will detect bikers without helmets and extract their number plates so that violators can be fined. I have trained my custom bike detector using YOLOv5 on Google Colab and have my weights file as best.pt; now I want to run that model in the Spyder IDE.

How can I load the YOLOv5 PyTorch model from a local directory?

It loads online from the repo with this code: torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).fuse().autoshape(), but I need to run it from a local directory. I have been stuck for the last week and unable to get help from anywhere. Can you please help?

Additional context

github-actions[bot] commented 3 years ago

Hello @docterstrang, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 3 years ago

@docterstrang you probably want to use PyTorch Hub for loading a custom YOLOv5 model in a separate project. See the PyTorch Hub tutorial: https://docs.ultralytics.com/yolov5

The section on loading a custom model provides this example:

Custom Models

To load a custom model, first load a PyTorch Hub model of the same architecture with the same number of classes, and then load a custom state dict into it. This example loads a custom 10-class YOLOv5s model 'yolov5s_10cls.pt':

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)  # same architecture, 10 classes

ckpt = torch.load('yolov5s_10cls.pt')  # load checkpoint
model.load_state_dict(ckpt['model'].state_dict())  # load state_dict
model.names = ckpt['model'].names  # define class names

docterstrang commented 3 years ago

No, I don't want to use PyTorch Hub. I want to load the YOLOv5 model from my local directory, where it is already downloaded on my system.


glenn-jocher commented 3 years ago

@docterstrang then what's the problem? You just load your model:

ckpt = torch.load('yolov5s_10cls.pt')  # load checkpoint
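
A minimal sketch of going from that checkpoint to a runnable model, assuming the cloned YOLOv5 repo is importable (the pickled checkpoint references its model classes) and that inference runs on CPU; paths are placeholders:

import torch

ckpt = torch.load('yolov5s_10cls.pt', map_location='cpu')  # checkpoint dict saved by train.py
model = ckpt['model'].float()  # the trained model is stored under the 'model' key; cast FP16 -> FP32
model.eval()  # switch to inference mode
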
docterstrang commented 3 years ago

@glenn-jocher when I run inference using my model, this error occurs:

[screenshot: modelerror]

glenn-jocher commented 3 years ago

@docterstrang please follow the custom training tutorial to train a custom model: https://docs.ultralytics.com/yolov5/tutorials/train_custom_data

Once you have trained your custom model, inference is easy: python detect.py --weights custom_model.pt

docterstrang commented 3 years ago

@glenn-jocher I've trained my custom model and now I want to run inference using the Spyder IDE, not the command line. Please help. Thank you!

glenn-jocher commented 3 years ago

@docterstrang see PyTorch Hub tutorial: https://docs.ultralytics.com/yolov5

github-actions[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

1chimaruGin commented 3 years ago

> @glenn-jocher when I run inference using my model this error occurs [screenshot: modelerror]

I'm using yolov5 hub for my Flask App.

import torch

ckpt = torch.load('D:/yolov5-master/biker_m.pt', map_location='cpu')  # checkpoint is a dict
model = ckpt['model'].float()  # the full model is stored under the 'model' key; cast FP16 -> FP32
model.eval()
result = model(img)  # img must already be a preprocessed (letterboxed, normalized) tensor

After that, you might also need some post-processing to draw the detections on the image; a sketch follows below.
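
A rough sketch of that post-processing, assuming the model loaded above, that the cloned YOLOv5 repo is on the Python path, and that these helper names match the repo version in use (they have moved between utils modules across releases):

import cv2
import torch
from utils.augmentations import letterbox  # utils.datasets.letterbox in older releases
from utils.general import non_max_suppression, scale_coords  # scale_coords became scale_boxes later

img0 = cv2.imread('bike.jpg')  # original BGR image (placeholder path)
img = letterbox(img0, new_shape=640)[0]  # resize and pad to the model input size
img = img[:, :, ::-1].transpose(2, 0, 1).copy()  # BGR -> RGB, HWC -> CHW
img = torch.from_numpy(img).float().unsqueeze(0) / 255.0  # normalize to 0-1, add batch dimension

with torch.no_grad():
    pred = model(img)[0]  # raw predictions
det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0]  # filter overlapping boxes
det[:, :4] = scale_coords(img.shape[2:], det[:, :4], img0.shape).round()  # map boxes back to img0

for *xyxy, conf, cls in det:  # draw each detection
    cv2.rectangle(img0, (int(xyxy[0]), int(xyxy[1])), (int(xyxy[2]), int(xyxy[3])), (0, 255, 0), 2)
cv2.imwrite('result.jpg', img0)
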

glenn-jocher commented 3 years ago

@1chimaruGin you can load YOLOv5 custom trained models very easily, see the PyTorch Hub Tutorial: https://github.com/ultralytics/yolov5#pytorch-hub

Custom Models

This example loads a custom 20-class VOC-trained YOLOv5s model 'yolov5s_voc_best.pt' with PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='yolov5s_voc_best.pt')  # custom model
rullisubekti commented 3 years ago

Can I change model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='yolov5s_voc_best.pt') to model = torch.hub.load('my local directory/yolov5', 'custom', path_or_model='yolov5s_voc_best.pt')?

weegary commented 2 years ago

> Can I change model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='yolov5s_voc_best.pt') to model = torch.hub.load('my local directory/yolov5', 'custom', path_or_model='yolov5s_voc_best.pt')?

I find that it should remain "ultralytics/yolov5", because the method goes to GitHub to grab some code: "ultralytics" is used as repo_owner and "yolov5" as repo_name. [screenshot]
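
That said, torch.hub.load can also be pointed at a local clone when source='local' is passed, which is what the later comments in this thread settle on; a minimal sketch with placeholder paths (the keyword was path_or_model in older releases, path in newer ones):

import torch

# 'path/to/yolov5' is a local clone of the repo; source='local' skips the GitHub fetch entirely
model = torch.hub.load('path/to/yolov5', 'custom', path='yolov5s_voc_best.pt', source='local')
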

AI-P-K commented 2 years ago

@glenn-jocher the problem with torch.hub.load is that it takes 2 seconds to load the model. Is there any way we could save models and use torch.load(model, map_location="")?

glenn-jocher commented 2 years ago

@AI-P-K 👋 Hello! Thanks for asking about handling inference results. YOLOv5 🚀 PyTorch Hub models allow for simple model loading and inference in a pure python environment without using detect.py.

Simple Inference Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the YOLOv5 'small' model. For details on all available models please see the README. Custom models can also be loaded, including custom trained PyTorch models and their exported variants, i.e. ONNX, TensorRT, TensorFlow, OpenVINO YOLOv5 models.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # yolov5n - yolov5x6 official model
#                                            'custom', 'path/to/best.pt')  # custom model

# Images
im = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, URL, PIL, OpenCV, numpy, list

# Inference
results = model(im)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
results.xyxy[0]  # im predictions (tensor)

results.pandas().xyxy[0]  # im predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

results.pandas().xyxy[0].value_counts('name')  # class counts (pandas)
# person    2
# tie       1

See YOLOv5 PyTorch Hub Tutorial for details.

Good luck 🍀 and let us know if you have any other questions!

AI-P-K commented 2 years ago

It is still using torch.hub.load, and the problem with torch.hub is that it takes 2 seconds to load. If I put this in a live inference pipeline, it is going to increase the waiting time by 2 seconds just for loading the model. You keep giving the same answer to many people who ask this question, avoiding actually answering the question: "Is it possible to use torch.load instead of torch.hub.load?"

glenn-jocher commented 2 years ago

Google Colab %timeit for a local YOLOv5s model is 200 ms with torch.hub.load.
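
A simple way to reproduce that kind of measurement (a sketch, assuming the working directory is a local YOLOv5 clone and the yolov5s.pt weights are already downloaded/cached):

import time
import torch

t0 = time.time()
model = torch.hub.load('.', 'yolov5s', source='local')  # load from the local clone, no GitHub fetch
print(f'model load time: {time.time() - t0:.3f} s')
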

AI-P-K commented 2 years ago

Ok, so the following script:

import time
import torch
from PIL import Image

start = time.time()
model = torch.hub.load('.', 'custom', path='runs/train/exp3/weights/best.pt', source='local')
im1 = Image.open('test-1/test/images/img_0674_png.rf.c540c5e527814a57719ef314faf627f5.jpg')
imgs = [im1]
results = model(imgs)
finish = time.time()

finish - start = 2.7129807472229004 seconds

This simple script on your Colab takes 0.002 s? Because in my case it takes almost 3 seconds. P.S. I am including the time it takes for the model to be loaded plus the inference, not only the time of the inference itself, in which case I get the same answer you do, 0.002 s. If so, please send me a link to that Google Colab, or your snippet of code if you would be so kind.

glenn-jocher commented 2 years ago

220 ms to load a local YOLOv5s model in Colab with PyTorch Hub

[screenshot]

AI-P-K commented 2 years ago

You are right, I get the same time in Colab. I can see it actually says it is using a cache: 'Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master'. I wonder why it is so slow on my PC; I have an RTX 3070, 64 GB RAM, and an i9. Would you have any recommendations for me in this instance?

P.S. I appreciate your answers.

glenn-jocher commented 2 years ago

@AI-P-K just use the same commands I used to get the same result. Local systems should be a bit faster than Colab actually.

Don't use force_reload=True; this forces a reload every time.

AI-P-K commented 2 years ago

@glenn-jocher I found a solution for loading a YOLOv5 custom-trained model using torch.load() instead of torch.hub.load(). This is useful in situations where you are calling the Python script for each inference individually. In my case the images I infer on come from a list of links, and I am using the CPU for inference. Solution as follows:

  1. Clone the YOLOv5 project.
  2. Open train.py and, in the if statement around line 425, add torch.save(model, 'path/model_name.pt'):

if best_fitness == fi:
    torch.save(model, 'path/model_name.pt')
    torch.save(ckpt, best)

@glenn-jocher let me know if you think I'm wrong in this process. P.S. Appreciate your engagement.
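
For reference, a minimal sketch of how a model saved this way could then be loaded with torch.load() instead of torch.hub.load(); 'path/model_name.pt' is the placeholder path from step 2, and CPU inference is assumed:

import torch

model = torch.load('path/model_name.pt', map_location='cpu')  # full pickled model, not a state_dict
model = model.float().eval()  # checkpoints are often saved in FP16; cast to FP32 for CPU inference
# note: unpickling the full model still needs the YOLOv5 source tree importable,
# because the pickle stores references to the repo's model classes
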
guenter-r commented 1 year ago

I just got routed to this post through my search, and I think the key takeaway (from the original question) is that you have to use source="local" to use a model that is stored on your machine, like this:

torch.hub.load('./yolov5', 'custom', source='local', path='yolov5/runs/train/exp18/weights/best.pt', force_reload=True)

I added a few tweaks to the Detections class, and to see how they worked out, I had to use the local model instead of the ultralytics GitHub repo.

Hope this helps. (I would consider this issue closed.)

AI-P-K commented 1 year ago

It is closed 100% :) but thank you for your interest.

glenn-jocher commented 12 months ago

@AI-P-K glad to hear that the issue is resolved! If you have any more questions or need further assistance in the future, feel free to ask. Have a great day!