Closed: trra1988 closed this issue 3 years ago
Hello @trra1988, thank you for your interest in YOLOv5! Please visit our Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.
If this is a custom training Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Requirements
Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:
$ pip install -r requirements.txt
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Google Colab and Kaggle notebooks with free GPU
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
Duplicate of #
hello, thanks for the reply, it helped me, but I still cannot detect the object. This is my training result and the detection picture: results.txt. thanks for the help.
this is the result of the detection
I had the same problem
@529035872 @trra1988 hi thanks for the bug notice! There seems to be an issue with Windows and/or Conda environments that causes detect.py to not detect anything sometimes. I'm not sure of the cause or the solution, so all I can do is point you to one of our verified environments below, where everything will work correctly:
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are passing. These tests evaluate proper operation of basic YOLOv5 functionality, including training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu.
@glenn-jocher thanks for your reply, I've solved this problem. In yolo.py there is a line "export = False  # onnx export"; if export=True, no objects will be detected.
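For context on why such a flag would suppress detections: in older YOLOv5 code the detection head's ONNX-export mode returned raw feature maps and skipped box decoding entirely, so downstream plotting had no boxes to draw. Below is a minimal toy sketch of that pattern; DetectHead and decode are made-up names for illustration, not the actual YOLOv5 code.

```python
class DetectHead:
    """Toy stand-in for a detection head with an ONNX-export switch."""

    export = False  # onnx export

    def forward(self, feature_maps):
        if self.export:
            # Export path: hand back raw feature maps for the ONNX graph;
            # no boxes are decoded here.
            return feature_maps
        # Normal inference path: decode feature maps into boxes.
        return [self.decode(f) for f in feature_maps]

    @staticmethod
    def decode(feature_map):
        # Placeholder decoding: one (x, y, w, h, conf) tuple per cell value.
        return [(v, v, 1.0, 1.0, 0.9) for v in feature_map]


head = DetectHead()
print(head.forward([[10, 20]]))  # decoded (x, y, w, h, conf) boxes

head.export = True               # instance-level switch for the demo
print(head.forward([[10, 20]]))  # raw feature maps only: nothing to plot
```

With export flipped on, the "detections" are just the untouched feature maps, which explains pictures coming back with no bounding boxes at all.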
@529035872 yolo.py has no export variable in the entire file. Your code is out of date. Update your code before anything else.
@glenn-jocher ok, thanks.
@glenn-jocher thanks for the help
@529035872 hello, I checked my yolo.py and it does not have this line (export = False # onnx export). Is that because you changed your own code?
If you use Windows and there is no detection result at all when using detect.py, you may have an environment problem. You should downgrade your CUDA to 10.2 and reinstall PyTorch 1.8.1+cu102. I tested CUDA 11.1, 11.2, and 11.3 with PyTorch 1.8.1; none of these worked, only CUDA 10.2 worked.
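One quick check before reinstalling anything: the PyTorch version string itself records which CUDA build is installed (e.g. 1.8.1+cu102 vs 1.8.1+cu111, as seen in the logs in this thread). A small stdlib-only sketch of reading that tag; cuda_build is a made-up helper name for illustration, not a torch API:

```python
def cuda_build(torch_version: str):
    """Return the CUDA tag from a PyTorch version string, or None.

    "1.8.1+cu102" -> "cu102"; "1.8.1+cpu" and plain "1.8.1" -> None.
    """
    _, _, local = torch_version.partition("+")
    return local if local.startswith("cu") else None


# In practice you would pass torch.__version__ here.
print(cuda_build("1.8.1+cu102"))  # cu102
print(cuda_build("1.8.1+cpu"))    # None
```

If the tag does not match the CUDA toolkit/driver you think you installed, that mismatch is worth ruling out first.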
@wudashuo ok, thanks
@wudashuo hello, thanks for the reply last time. I changed my CUDA to version 11.0 and the code is running on Ubuntu, but I still cannot detect the object. Have you run it on Ubuntu before? thanks
I met several environment problems on Windows, but I never met a problem on Ubuntu, whether with CUDA 10.2, 11.1, or 11.2. Have you changed the plotting code?
No, I did not; I just ran detection with the pretrained model.
@wudashuo The problem is solved. I changed my system to Ubuntu 18.04 and CUDA to 10.2; the problem was caused by the CUDA version. Thanks
@wudashuo @trra1988 another user found that FP32 inference worked for them but not FP16 inference (the default in detect.py).
Can you see if the problem also happens on your system with YOLOv5 PyTorch Hub models? The Hub models are loaded as FP32 and use a different inference pathway (with AMP) which does not convert them to FP16. Try this line:
python hubconf.py
You should see
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
Adding AutoShape...
YOLOv5 v5.0-112-g9f3a388 torch 1.8.1 CPU
image 1/5: 720x1280 2 persons, 2 ties
image 2/5: 720x1280 2 persons, 2 ties
image 3/5: 1080x810 4 persons, 1 bus
image 4/5: 1080x810 4 persons, 1 bus
image 5/5: 320x640
Speed: 100.4ms pre-process, 249.1ms inference, 1.7ms NMS per image at shape (5, 3, 640, 640)
Saved zidane.jpg, zidane.jpg, image2.jpg, bus.jpg, image4.jpg to runs/hub/exp5
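On the FP16 point above: half precision has only a 10-bit mantissa and tops out around 65504, so values that are exact in FP32 can round or overflow in FP16; on some driver/CUDA combinations that numeric headroom is the difference between normal output and empty detections. A small stdlib demonstration of float16 rounding using struct's IEEE 754 half-precision format code; this illustrates the precision limits, not the actual failure mechanism on any particular GPU:

```python
import struct


def to_float16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision ('e')."""
    return struct.unpack('e', struct.pack('e', x))[0]


print(to_float16(0.1))      # not exactly 0.1: rounds to the nearest float16
print(to_float16(65504.0))  # the largest finite float16, preserved exactly
# Packing a larger finite value, e.g. struct.pack('e', 1e5),
# fails because it is out of float16 range.
```

FP32 inference (as used by the Hub models mentioned above) avoids this class of problem entirely, which is why it is a useful A/B test.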
Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Access additional YOLOv5 resources:
Access additional Ultralytics resources:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 and Vision AI!
Question
Additional context
hello, when I run detect.py the code doesn't show any error and the results are saved to the folder, but the pictures in the folder don't have any bounding boxes or classes drawn on them. The run output is below, thanks
(base) lin@lin:~/yolov5-master$ python detect.py --source data/images --weights yolov5s.pt --conf 0.25
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, hide_conf=False, hide_labels=False, img_size=640, iou_thres=0.45, line_thickness=3, name='exp', nosave=False, project='runs/detect', save_conf=False, save_crop=False, save_txt=False, source='data/images', update=False, view_img=False, weights=['yolov5s.pt'])
YOLOv5 2021-4-27 torch 1.8.1+cu111 CUDA:0 (GeForce GTX 1660 Ti, 5944.625MB)
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS
image 1/2 /home/lin/yolov5-master/data/images/bus.jpg: 640x480 Done. (0.045s)
image 2/2 /home/lin/yolov5-master/data/images/zidane.jpg: 384x640 Done. (0.041s)
Results saved to runs/detect/exp3
Done. (0.138s)