ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

YOLOv5 has an issue when trying the sotabench benchmark with the COCO dataset (val2017) #10846

Closed. MEssam711 closed this issue 1 year ago.

MEssam711 commented 1 year ago

Search before asking

YOLOv5 Component

Validation

Bug

The error that appeared:

Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module quant_cpu...
Using /home/hanysalah/.cache/torch_extensions/py310_cu117 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/hanysalah/.cache/torch_extensions/py310_cu117/quant_cuda/build.ninja...
Building extension module quant_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module quant_cuda...
Using cache found in /home/hanysalah/.cache/torch/hub/ultralytics_yolov5_master
YOLOv5 πŸš€ 2023-1-23 Python-3.10.6 torch-1.13.0+cu117 CUDA:0 (NVIDIA GeForce GTX 1650, 4096MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
Adding AutoShape...
total 60 layers ; using posit on 60 conv/linear layers
loading annotations into memory...
Done (t=0.74s)
creating index...
index created!
Evaluation:   0%|          | 0/1250 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/hanysalah/technical/posits/conga2022/torchbench_coco-posit.py", line 83, in <module>
    COCO.benchmark(
  File "/home/hanysalah/technical/posits/torchbench/torchbench/object_detection/coco.py", line 220, in benchmark
    test_results, speed_mem_metrics, run_hash = evaluate_detection_coco(
  File "/home/hanysalah/technical/posits/torchbench/torchbench/object_detection/utils.py", line 209, in evaluate_detection_coco
    original_output = model(input)
  File "/home/hanysalah/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/hanysalah/.local/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/hanysalah/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 690, in forward
    im = im.transpose((1, 2, 0))  # reverse dataloader .transpose(2, 0, 1)
TypeError: transpose() received an invalid combination of arguments - got (tuple), but expected one of:
 * (int dim0, int dim1)
 * (name dim0, name dim1)
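
For context, the failing line applies a NumPy-style axes-tuple transpose to a torch.Tensor, which torch does not accept. A minimal sketch of the mismatch (an illustration, not YOLOv5 code):

import torch

im = torch.rand(3, 480, 640)         # CHW image tensor
im.numpy().transpose((1, 2, 0))      # OK: NumPy transpose accepts a tuple of axes
im.permute(1, 2, 0)                  # torch equivalent for reordering all dims
# im.transpose((1, 2, 0))            # TypeError: torch.Tensor.transpose expects
                                     # exactly two ints (dim0, dim1), not a tuple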

Environment

- YOLOv5: from PyTorch Hub
- OS: Ubuntu 22
- Python: 3.8

Minimal Reproducible Example

from torchbench.object_detection import COCO
from torchbench.utils import send_model_to_device
from torchbench.object_detection.transforms import Compose, ConvertCocoPolysToMask, ToTensor
import torchvision
import PIL

import torch.nn as nn
import qtorch_plus
from qtorch_plus.quant import configurable_table_quantize, posit_quantize

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def coco_data_to_device(input, target, device: str = "cuda", non_blocking: bool = True):
    input = list(inp.to(device=device, non_blocking=non_blocking) for inp in input)
    target = [{k: v.to(device=device, non_blocking=non_blocking) for k, v in t.items()} for t in target]
    return input, target

def coco_collate_fn(batch):
    return tuple(zip(*batch))

def coco_output_transform(output, target):
    output = [{k: v.to("cpu") for k, v in t.items()} for t in output]
    return output, target

transforms = Compose([ConvertCocoPolysToMask(), ToTensor()])

def other_weight(input):
    input = posit_quantize(input, nsize=16, es=1)
    return input

def other_activation(input):
    input = posit_quantize(input, nsize=16, es=1)
    return input

def linear_weight(input):
    input = posit_quantize(input, nsize=8, es=1, scale=4.0)
    return input

def linear_activation(input):
    global act_data
    input = posit_quantize(input, nsize=8, es=1, scale=0.5)
    return input

def forward_pre_hook_other(m, input):
    return (other_activation(input[0]),)

def forward_pre_hook_linear(m, input):
    return (linear_activation(input[0]),)

layer_count = 0
total_layer = 0

for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d) or isinstance(module, nn.Linear):
        module.weight.data = linear_weight(module.weight.data)
        module.register_forward_pre_hook(forward_pre_hook_linear)
        total_layer += 1
        layer_count += 1
    else:  # should use fixed point or posit 16 for the other layers' weights
        if hasattr(module, 'weight'):
            total_layer += 1
            module.weight.data = other_weight(module.weight.data)
            module.register_forward_pre_hook(forward_pre_hook_other)
        # pass

print("total %d layers ; using posit on %d conv/linear layers" % (total_layer, layer_count))

"""COCO.benchmark(
    model=model,
    paper_model_name='Mask R-CNN (ResNet-50-FPN)',
    paper_arxiv_id='1703.06870',
    transforms=transforms,
    model_output_transform=coco_output_transform,
    send_data_to_device=coco_data_to_device,
    collate_fn=coco_collate_fn,
    batch_size=4,
    num_gpu=1,
)"""

COCO.benchmark(
    model=model,
    paper_model_name='Yolo',
    transforms=transforms,
    model_output_transform=coco_output_transform,
    send_data_to_device=coco_data_to_device,
    collate_fn=coco_collate_fn,
    batch_size=4,
    num_gpu=1,
)
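
For reference, the traceback suggests the torchbench harness hands the hub model a list of CHW torch tensors (produced by ToTensor), and AutoShape then applies its NumPy-style CHW-to-HWC transpose to them. A hypothetical one-liner that should reproduce the same failure outside torchbench:

import torch
model([torch.rand(3, 480, 640)])  # list of CHW tensors -> im.transpose((1, 2, 0)) -> TypeError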

Additional

I just need to get sotabench benchmark results using the COCO dataset (val2017).

You must uninstall the existing torchbench and install the version below instead, following the newly updated README in this repo: https://github.com/minhhn2910/conga2022/blob/main/README.md

Are you willing to submit a PR?

github-actions[bot] commented 1 year ago

πŸ‘‹ Hello @MEssam711, thank you for your interest in YOLOv5 πŸš€! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a πŸ› Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments, with all dependencies including CUDA/CUDNN, Python, and PyTorch preinstalled.

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

MEssam711 commented 1 year ago

I have already included the error message, code, and additional info to help clarify the issue.

github-actions[bot] commented 1 year ago

πŸ‘‹ Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 πŸš€ and Vision AI ⭐!

glenn-jocher commented 10 months ago

@MEssam711 thank you for providing the detailed information. The issue arises due to the mismatched dimensions in the transpose operation. Please make sure that the input tensor to the model has the shape (N, C, H, W) where N is the batch size, C is the number of channels, and H, W are the height and width. You may need to modify the input data to match the expected input format. Also, ensure the transforms are properly applied before the input is fed to the model. We appreciate your patience and understanding as we work toward resolving this.
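
As a minimal sketch of that expected format (an assumption-laden illustration: `model` is the AutoShape model loaded from torch.hub above, and since torch tensor inputs bypass AutoShape's letterboxing, H and W should already be multiples of the model stride, typically 32):

import torch

im = torch.rand(1, 3, 320, 640)  # N,C,H,W, values in 0-1, H and W multiples of 32
results = model(im)              # tensor input is passed straight to the model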

ego-thales commented 1 month ago

It seems to me that forward tries to use NumPy's .transpose() method on a torch.Tensor. I think this comes from incorrect preprocessing in AutoShape.

I personally tested the following:

im = torch.rand(3, 586, 872)  # Sample image

# With Tensor inputs
model(im)  # ValueError: not enough values to unpack (expected 4, got 3)
model(im[None])  # RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 38 but got size 37 for tensor number 1 in the list.
model([im])  # TypeError: transpose() received an invalid combination of arguments [...]

# With NumPy inputs
model(im[None].numpy())  # ValueError: axes don't match array
model([im.numpy()])  # OK
model(im.numpy())  # OK for one sample only

In the end, I didn't find an appropriate way to avoid the NumPy conversion. Any tips? Thanks!

glenn-jocher commented 1 month ago

Thank you for your detailed observations. It appears that the issue stems from the input format. YOLOv5 models expect inputs as NumPy arrays or lists of NumPy arrays. To avoid conversion issues, please ensure your inputs are in the correct format before passing them to the model. For example:

import torch
import numpy as np

im = torch.rand(3, 586, 872).numpy()  # Convert to NumPy array
model([im])  # Pass as a list of NumPy arrays

Please verify if this resolves your issue with the latest YOLOv5 version.

ego-thales commented 1 month ago

If you could deactivate your chatbot @glenn-jocher it'd be much appreciated. Its answers are empty and don't add any insight or useful input. If people want to talk to an LLM, they can do it outside of GitHub.

On a different note, I understood that:

I think it would be very nice to add this to the doc. Currently, it states

#   torch:           = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)

which made me believe that the size parameter of forward() needed to be the width of my tensor. I think I might not be the only one falling for this.
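
For illustration, a short sketch of the behavior as I now understand it (hedged, assuming the torch.hub AutoShape model): size sets the longest-side target that NumPy/PIL inputs are letterboxed to, and is unrelated to the width of the input array:

import numpy as np

im = np.zeros((586, 872, 3), dtype=np.uint8)  # HWC NumPy image
results = model(im, size=640)                 # letterboxed so the longest side is 640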