ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Load YOLOv5 from PyTorch Hub ⭐ #36

glenn-jocher opened 4 years ago

glenn-jocher commented 4 years ago

πŸ“š This guide explains how to load YOLOv5 πŸš€ from PyTorch Hub https://pytorch.org/hub/ultralytics_yolov5. See YOLOv5 Docs for additional details. UPDATED 26 March 2023.

Before You Start

Install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.

pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

πŸ’‘ ProTip: Cloning https://github.com/ultralytics/yolov5 is not required πŸ˜ƒ

Load YOLOv5 with PyTorch Hub

Simple Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the lightest and fastest YOLOv5 model. For details on all available models please see the README.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(im)

results.pandas().xyxy[0]
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

Detailed Example

This example shows batched inference with PIL and OpenCV image sources. results can be printed to console, saved to runs/hub, shown on screen in supported environments, and returned as tensors or pandas DataFrames.

import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
for f in 'zidane.jpg', 'bus.jpg':
    torch.hub.download_url_to_file('https://ultralytics.com/images/' + f, f)  # download 2 images
im1 = Image.open('zidane.jpg')  # PIL image
im2 = cv2.imread('bus.jpg')[..., ::-1]  # OpenCV image (BGR to RGB)

# Inference
results = model([im1, im2], size=640)  # batch of images

# Results
results.print()  
results.save()  # or .show()

results.xyxy[0]  # im1 predictions (tensor)
results.pandas().xyxy[0]  # im1 predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

For all inference options see YOLOv5 AutoShape() forward method: https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252

Inference Settings

YOLOv5 models contain various inference attributes, such as the confidence threshold and IoU threshold, which can be set by:

model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
model.max_det = 1000  # maximum number of detections per image
model.amp = False  # Automatic Mixed Precision (AMP) inference

results = model(im, size=320)  # custom inference size

Device

Models can be transferred to any device after creation:

model.cpu()  # CPU
model.cuda()  # GPU
model.to(device)  # i.e. device=torch.device(0)

Models can also be created directly on any device:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')  # load on CPU

πŸ’‘ ProTip: Input images are automatically transferred to the correct model device before inference.

Silence Outputs

Models can be loaded silently with _verbose=False:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False)  # load silently

Input Channels

To load a pretrained YOLOv5s model with 4 input channels rather than the default 3:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', channels=4)

In this case the model will be composed of pretrained weights except for the very first input layer, which is no longer the same shape as the pretrained input layer. The input layer will remain initialized by random weights.

Number of Classes

To load a pretrained YOLOv5s model with 10 output classes rather than the default 80:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)

In this case the model will be composed of pretrained weights except for the output layers, which are no longer the same shape as the pretrained output layers. The output layers will remain initialized by random weights.

Force Reload

If you run into problems with the above steps, setting force_reload=True may help by discarding the existing cache and forcing a fresh download of the latest YOLOv5 version from PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)  # force reload

Screenshot Inference

To run inference on your desktop screen:

import torch
from PIL import ImageGrab

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = ImageGrab.grab()  # take a screenshot

# Inference
results = model(im)

Multi-GPU Inference

YOLOv5 models can be loaded onto multiple GPUs in parallel with threaded inference:

import torch
import threading

def run(model, im):
    results = model(im)
    results.save()

# Models
model0 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=0)
model1 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=1)

# Inference
threading.Thread(target=run, args=[model0, 'https://ultralytics.com/images/zidane.jpg'], daemon=True).start()
threading.Thread(target=run, args=[model1, 'https://ultralytics.com/images/bus.jpg'], daemon=True).start()

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch) use pretrained=False. You must provide your own training script in this case. Alternatively see our YOLOv5 Train Custom Data Tutorial for model training.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

Base64 Results

For use with API services. See https://github.com/ultralytics/yolov5/pull/2291 and Flask REST API example for details.

import base64
from io import BytesIO

from PIL import Image

results = model(im)  # inference

results.ims  # array of original images (as np array) passed to model for inference
results.render()  # updates results.ims with boxes and labels
for im in results.ims:
    buffered = BytesIO()
    im_base64 = Image.fromarray(im)
    im_base64.save(buffered, format="JPEG")
    print(base64.b64encode(buffered.getvalue()).decode('utf-8'))  # base64-encoded image with results

Cropped Results

Results can be returned and saved as detection crops:

results = model(im)  # inference
crops = results.crop(save=True)  # cropped detections dictionary
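
Each element of crops is a dict; a small usage sketch, assuming the current crop() return format with 'box', 'conf', 'cls', 'label' and 'im' (numpy array) keys:

for crop in crops:
    print(crop['label'], crop['box'])  # label string (name + confidence) and xyxy box
    # crop['im'] is the cropped detection as a numpy array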

Pandas Results

Results can be returned as Pandas DataFrames:

results = model(im)  # inference
results.pandas().xyxy[0]  # Pandas DataFrame
Pandas Output:

print(results.pandas().xyxy[0])
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

Sorted Results

Results can be sorted by column, i.e. to sort license plate digit detection left-to-right (x-axis):

results = model(im)  # inference
results.pandas().xyxy[0].sort_values('xmin')  # sorted left-right

JSON Results

Results can be returned in JSON format once converted to .pandas() dataframes using the .to_json() method. The JSON format can be modified using the orient argument. See pandas .to_json() documentation for details.

results = model(ims)  # inference
results.pandas().xyxy[0].to_json(orient="records")  # JSON img1 predictions
JSON Output:

[
  {"xmin":749.5,"ymin":43.5,"xmax":1148.0,"ymax":704.5,"confidence":0.8740234375,"class":0,"name":"person"},
  {"xmin":433.5,"ymin":433.5,"xmax":517.5,"ymax":714.5,"confidence":0.6879882812,"class":27,"name":"tie"},
  {"xmin":115.25,"ymin":195.75,"xmax":1096.0,"ymax":708.0,"confidence":0.6254882812,"class":0,"name":"person"},
  {"xmin":986.0,"ymin":304.0,"xmax":1028.0,"ymax":420.0,"confidence":0.2873535156,"class":27,"name":"tie"}
]

Custom Models

This example loads a custom 20-class VOC-trained YOLOv5s model 'best.pt' with PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # local model
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo

TensorRT, ONNX and OpenVINO Models

PyTorch Hub supports inference on most YOLOv5 export formats, including custom trained models. See TFLite, ONNX, CoreML, TensorRT Export tutorial for details on exporting models.

πŸ’‘ ProTip: TensorRT may be up to 2-5X faster than PyTorch on GPU benchmarks
πŸ’‘ ProTip: ONNX and OpenVINO may be up to 2-3X faster than PyTorch on CPU benchmarks

model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.pt')  # PyTorch
# the same call accepts any exported weights, e.g.:
#   path='yolov5s.torchscript'       # TorchScript
#   path='yolov5s.onnx'              # ONNX
#   path='yolov5s_openvino_model/'   # OpenVINO
#   path='yolov5s.engine'            # TensorRT
#   path='yolov5s.mlmodel'           # CoreML (macOS-only)
#   path='yolov5s.tflite'            # TFLite
#   path='yolov5s_paddle_model/'     # PaddlePaddle

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

almog-gueta commented 3 years ago

@glenn-jocher Thank you for your fast reply, I have seen this tutorial.
However, I want to use the YOLO model as one layer in my model instead of using the provided train.py. How can I do so? Thanks

drorlederman commented 3 years ago

Hi, I would like to use the Torch Hub YOLOv5 model loading without needing to download master on every run (say I want to create a Docker image that automatically installs the environment on a specific trigger). Is there an option, e.g., to download the necessary files in advance? Thanks!

glenn-jocher commented 3 years ago

@drorlederman torch hub models never update unless you pass force_reload=True

drorlederman commented 3 years ago

@drorlederman torch hub models never update unless you pass force_reload=True

I'm getting this message everytime I'm trying to run inference: Downloading: β€œhttps://github.com/ultralytics/yolov5/archive/master.zip” to /root/.cache/torch/hub/master.zip

Is there any way to prevent this? I'm using a fine-tuned model (a pre-trained model that was adapted using my own dataset).

Thanks

Hexer611 commented 3 years ago

model = torch.hub.load('yolo-v5', 'custom', path=model_path, source='local')

@drorlederman I've downloaded the repository and am using this to load a model from my storage. It doesn't print that download message when I use it this way. I hope this helps.
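
For example, in a Docker build you could copy the repo and weights into the image once at build time and point torch.hub at them, so nothing is downloaded at run time; a hedged sketch, where '/opt/yolov5' and '/opt/weights/best.pt' are hypothetical paths:

import torch

# repo and weights were baked into the image at build time; no network needed here
model = torch.hub.load('/opt/yolov5', 'custom', path='/opt/weights/best.pt', source='local')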

drorlederman commented 3 years ago

model = torch.hub.load('yolo-v5', 'custom', path=model_path, source='local')

@drorlederman I've downloaded the repository and am using this to load a model from my storage. It doesn't print that download message when I use it this way. I hope this helps.

Thanks, but I'm trying to run the model at production level, without cloning the repository.

spectrum151 commented 3 years ago

Hi, detect.py takes many different parameters as input, like line_thickness and hide_conf, but I can't find information on how to use these parameters through PyTorch Hub.

glenn-jocher commented 3 years ago

@spectrum151 PyTorch Hub models use AutoShape wrappers with the following attributes: https://github.com/ultralytics/yolov5/blob/b83e1a4adcf77ccafa72b22ade6cb3898ccb0e05/models/common.py#L227-L252

If you'd like to add additional attributes from detect.py please see https://github.com/ultralytics/yolov5#contribute for submitting a PR, thanks!

spectrum151 commented 3 years ago

@glenn-jocher, I found a working method for myself, but I have not found how to pass the line_thickness value through torch hub. If I change this line https://github.com/spectrum151/yolov5/blob/b83e1a4adcf77ccafa72b22ade6cb3898ccb0e05/models/common.py#L328-L329 to plot_one_box(box, im, label=label, color=colors(cls), line_thickness=1) it works.

glenn-jocher commented 3 years ago

@spectrum151 you could pass the linewidth argument to the plotting function here, but yes your example seems fine. https://github.com/ultralytics/yolov5/blob/b83e1a4adcf77ccafa72b22ade6cb3898ccb0e05/models/common.py#L328-L330

spectrum151 commented 3 years ago

@glenn-jocher, another question: as you may have noticed, the purpose of the neural network is to detect license plates. The network detects well, but now the question is how to combine the output into license plates. Each letter and number is a separate class, and I need to output in the format a123bc45. Any tips or similar projects to look at?

glenn-jocher commented 3 years ago

@spectrum151 yes it seems to be working well. I suppose you'd want additional logic to group numbers together to individual license plates and then to sort each group left to right.
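
A hedged sketch of the sorting step, building on the Sorted Results section above (grouping characters into separate plates is still up to you; this assumes one plate per image):

df = results.pandas().xyxy[0].sort_values('xmin')  # characters left to right
plate = ''.join(df['name'])  # e.g. 'a123bc45', one class name per detected character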

spectrum151 commented 3 years ago

@glenn-jocher, am I correct in assuming that the output data in PyTorch Hub is available in pandas and JSON formats?

almog-gueta commented 3 years ago

Hello, I am trying to train the YOLO model from scratch (not pretrained) using my own train.py code, and want to use the loss function that exists in this repo (class ComputeLoss in loss.py). However, I am trying to understand what the input 'targets' to this function should be.

What I mean to do is:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch
model.nc = 2  # attach number of classes to model
model.names = ['proper mask', 'no mask']  # class names

for epoch in tqdm(range(train_params.num_epochs)):
    model.train()
    for i, (images, bboxs, labels) in enumerate(train_loader):
        preds = model(images)
        loss = compute_loss(preds, targets)
        loss.backward()

Thank you!
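
For reference, ComputeLoss in utils/loss.py consumes targets as an (n, 6) tensor following YOLOv5's dataloader convention: one row per object, [image_index_in_batch, class, x_center, y_center, width, height] with boxes normalized to 0-1. A minimal sketch of that layout, with hypothetical values:

import torch

# hypothetical labels for a batch of 2 images:
# two objects in image 0, one object in image 1
targets = torch.tensor([
    [0., 1., 0.50, 0.50, 0.25, 0.40],
    [0., 0., 0.30, 0.60, 0.10, 0.20],
    [1., 1., 0.70, 0.40, 0.30, 0.30],
])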

glenn-jocher commented 3 years ago

@spectrum151 yes this tutorial (above) explains how to output detections in pandas and JSON formats.

@almog-gueta for training YOLOv5 models see the Train Custom Data tutorial.

spectrum151 commented 3 years ago

@glenn-jocher, I fully copied the code from the example and it does not output anything; same with the JSON format.

glenn-jocher commented 3 years ago

@spectrum151 there's a minimum level of python required to use YOLOv5. The values you are showing are defined in your python workspace. If you want to visualize variables in python you can use the print() function.

spectrum151 commented 3 years ago

@glenn-jocher Sorry, I thought it was saving to a file :D thanks

almog-gueta commented 3 years ago

@glenn-jocher I have seen this tutorial. I am trying to train the model with my own train.py code since my evaluation metric is not mAP; it is the average of the accuracy and the IoU of the model.

jerofad commented 3 years ago

Hello,

I am making use of this approach, but instead of images I am passing a batch of tensors. Now I am confused as to what the results look like. Any help?

glenn-jocher commented 3 years ago

@jerofad this PyTorch Hub method is meant for passing images directly as filenames, PIL images, etc. rather than torch tensors.
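
If you do pass torch tensors, AutoShape is expected to forward them straight to the underlying model with no letterboxing, NMS or Results object, so you get raw network output back; a hedged sketch of the difference:

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
im = torch.zeros(1, 3, 640, 640)  # hypothetical normalized BCHW batch (values 0-1)
raw = model(im)  # raw network output: no pre-processing, no NMS, no Results object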

jerofad commented 3 years ago

@glenn-jocher oh okay thanks.

OrjwanZaafarani commented 3 years ago

Hi, thanks for the great effort! What does model.conf represent? Is it the objectness score?

glenn-jocher commented 3 years ago

@OrjwanZaafarani model.conf is the confidence threshold when running inference.

tabarkarajab commented 3 years ago

hi, I'm working on glove vs. non-glove detection on a Jetson Nano (not tested there yet, but tested on a server PC). I have trained my model on 300+ images with yolov5s, but I'm still getting poor results in image classification. Any idea how many images I need? I'm using a downloaded dummy glove as shown in the image and normal wool gloves as the glove class for classification between the two. My training and test data look like this:

(attached images: train_batch0, test_batch1_pred)

glenn-jocher commented 3 years ago

@tabarkarajab see Tips for Best Training Results in our Tutorials section.

tabarkarajab commented 3 years ago

@glenn-jocher I tried these but didn't find any solution. Can you please help?

DatDoc commented 3 years ago

Hi @glenn-jocher, I want to ensemble multiple models in PyTorch Hub. Could you show me the way to do it? Thank you.

glenn-jocher commented 3 years ago

@DatDoc model ensembling is only supported via detect.py. See the Model Ensembling tutorial to get started.
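
Per that tutorial, passing multiple weights to detect.py forms an ensemble at inference time, e.g.:

python detect.py --weights yolov5s.pt yolov5m.pt  # two models ensembled at inference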

rsdel2007 commented 3 years ago

Hi, is there a way to use pytorch hub on a video?

glenn-jocher commented 3 years ago

@rsdel2007 yes, but you have to create your own video dataloader, i.e. a for loop passing one batch of frames at a time to the model.
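
A minimal hedged sketch with OpenCV, assuming a hypothetical local file 'video.mp4' and one frame per batch:

import cv2
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
cap = cv2.VideoCapture('video.mp4')  # hypothetical video path
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break  # end of video
    results = model(frame[..., ::-1])  # OpenCV BGR to RGB
    print(results.pandas().xyxy[0])  # detections for this frame
cap.release()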

kunalb99 commented 3 years ago

Hi @glenn-jocher, in this:

model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo

how can we save bounding box coordinates directly, like in the CLI where we can simply do it by adding --save-txt at the end?

asim-pointivo commented 3 years ago

Hi @glenn-jocher. At the very start it says we need python>=3.8? I suspect this might be a typo, since the yolov5 implementation requires python>=3.6 and pytorch should work with 3.6 as well. Please let me know if I am missing something?

glenn-jocher commented 3 years ago

@asim-pointivo python requirements are >= 3.6. We will update the tutorials and docs for this change shortly, thanks for letting us know!

glenn-jocher commented 3 years ago

@kunalb99 predictions are available after inference:

results = model(imgs)
print(results.xywhn)

If you'd like to submit a PR with a new method like results.save_txt() the place to do this is here: https://github.com/ultralytics/yolov5/blob/61047a2b4fb318a2cf86475c0099ead7832e45cf/models/common.py#L334-L353
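
In the meantime, a hedged stand-in (not part of the API) that writes detections in the same class x_center y_center width height layout that --save-txt uses:

results = model(imgs)
for i, pred in enumerate(results.xywhn):  # normalized xywh predictions, one tensor per image
    with open(f'image{i}.txt', 'w') as f:
        for *xywh, conf, cls in pred.tolist():  # each row: x, y, w, h, confidence, class
            f.write(f'{int(cls)} ' + ' '.join(f'{x:.6f}' for x in xywh) + '\n')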

tabarkarajab commented 3 years ago

Hi @glenn-jocher, I need to ask you another question. I have trained a model on a custom dataset with PyTorch; initially I used the yolov5s.pt file for the weights. Now I need to increase the dataset, so do I need to retrain from the yolov5s.pt file, or can I start from my best.pt file?

glenn-jocher commented 3 years ago

@tabarkarajab your post is off topic for this issue thread.

tabarkarajab commented 3 years ago

@glenn-jocher should I create a new issue for this?

yufanxin commented 3 years ago

Hello, I ran detect.py directly in the tag 5.0 version, but the pictures under the runs folder are the original pictures without any prediction information. Can you help me?

glenn-jocher commented 3 years ago

@yufanxin if no detections are appearing on the default images with python detect.py then there may be a problem on your system between your CUDA drivers and torch install. This happens sometimes on Windows and on Conda environments.

I would recommend you try one of our verified environments below where everything should work correctly:

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are passing. These tests evaluate proper operation of basic YOLOv5 functionality, including training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu.

yufanxin commented 3 years ago

@glenn-jocher Thank you for your reply and suggestions, and best wishes to you!

zzy0222 commented 3 years ago

I tried the sample just now, but it doesn't work. I copied this code into test.py, but when I run it, this is what I got. So, could anybody tell me why? I remember it ran correctly several days ago. Appreciate your response, thanks!

glenn-jocher commented 3 years ago

@zzy0222 it appears you may have environment problems. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python 3.8 environment, clone the latest repo (code changes daily), and pip install -r requirements.txt again. We also highly recommend using one of our verified environments below.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

zzy0222 commented 3 years ago

The problem was fixed after cloning the latest repo. I guess my original environment lacked some necessary dependencies. Thank you very much!

Hassamarshad commented 3 years ago

How can I test it on a video using torch.hub.load?

glenn-jocher commented 3 years ago

@Hassamarshad you embed a YOLOv5 model into a video reader loop. The details are up to you, but you can start here: https://docs.opencv.org/4.5.2/dd/d43/tutorial_py_video_display.html

Hassamarshad commented 3 years ago

@glenn-jocher I am using this code but it is not working. I want to show the video at runtime with bounding boxes when an object is detected.

import cv2
import numpy as np
import torch
from PIL import Image

model = torch.hub.load('C:/Users/Hassam/.cache/torch/hub/ultralytics_yolov5_master', 'custom', force_reload=True, path='best.pt', source='local')

cap = cv2.VideoCapture(r'E:\HMBD1\hmdb51_org\smoke\smoke\American_History_X_smoke_u_nm_np1_fr_med_43.avi')

while cap.isOpened():
    ret, frame = cap.read()
    results = model(frame, size=640)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

results.save()

hoangnkust commented 3 years ago

I have a problem where objects are not detected when I run inference with YOLOv5 and PyTorch Hub, while detect.py still works normally. Can you help me?

glenn-jocher commented 3 years ago

@hoangnkust it appears you may have environment problems. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python 3.8 environment, clone the latest repo (code changes daily), and pip install -r requirements.txt again. We also highly recommend using one of our verified environments below.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

hoangnkust commented 3 years ago

Thank you sir, I changed the CUDA version; now it's working normally.