Closed ghost closed 2 years ago
@cyberFoxi this warning is stemming from your OpenCV installation, which you may want to review, and is probably not related to YOLOv5.
Hmm OK. But I didn't have this problem a few weeks ago when the yolov5 version was even older.
I installed the requirements and then just ran python3 detect.py.
I have the following pip3 installations. OpenCV is 4.5.1 as you can see.
Do you have any other idea why this could be?
nvidia@nvidia-desktop:~$ pip3 list
Package Version
----------------------------- -------------------
absl-py 0.12.0
apt-clone 0.2.1
apturl 0.5.2
asn1crypto 0.24.0
beautifulsoup4 4.6.0
blinker 1.4
Brlapi 0.6.6
cachetools 4.2.1
certifi 2018.1.18
chardet 3.0.4
click 6.7
colorama 0.3.7
cryptography 2.1.4
cupshelpers 1.0
cycler 0.10.0
dataclasses 0.8
decorator 4.1.2
defer 1.0.6
distro-info 0.18ubuntu0.18.04.1
feedparser 5.2.1
google-auth 1.29.0
google-auth-oauthlib 0.4.4
graphsurgeon 0.4.5
grpcio 1.37.0
html5lib 0.999999999
httplib2 0.9.2
idna 2.6
importlib-metadata 4.0.1
jetson-stats 3.1.0
Jetson.GPIO 2.0.16
keyring 10.6.0
keyrings.alt 3.0
kiwisolver 1.3.1
language-selector 0.1
launchpadlib 1.10.6
lazr.restfulclient 0.13.5
lazr.uri 1.0.3
louis 3.5.0
lxml 4.2.1
macaroonbakery 1.1.3
Mako 1.0.7
Markdown 3.3.4
MarkupSafe 1.0
matplotlib 3.3.4
numpy 1.19.4
oauth 1.0.1
oauthlib 3.1.0
onboard 1.4.1
opencv-python 4.5.1.48
PAM 0.4.2
pandas 1.1.5
Pillow 8.2.0
pip 21.0.1
protobuf 3.15.8
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycairo 1.16.2
pycrypto 2.6.1
pycups 1.9.73
pygobject 3.26.1
PyICU 1.9.8
PyJWT 1.5.3
pymacaroons 0.13.0
PyNaCl 1.1.2
pyparsing 2.4.7
pyRFC3339 1.0
python-apt 1.6.5+ubuntu0.5
python-dateutil 2.8.1
python-debian 0.1.32
pytz 2018.3
pyxattr 0.6.0
pyxdg 0.25
PyYAML 5.4.1
requests 2.25.1
requests-oauthlib 1.3.0
requests-unixsocket 0.1.5
rsa 4.7.2
scipy 1.5.4
seaborn 0.11.1
SecretStorage 2.3.1
setuptools 56.0.0
simplejson 3.13.2
six 1.11.0
ssh-import-id 5.7
system-service 0.3
systemd-python 234
tensorboard 2.5.0
tensorboard-data-server 0.6.0
tensorboard-plugin-wit 1.8.0
tensorrt 7.1.3.0
torch 1.8.1
torchvision 0.9.1
tqdm 4.60.0
typing-extensions 3.7.4.3
ubuntu-drivers-common 0.0.0
uff 0.6.9
unity-scope-calculator 0.1
unity-scope-chromiumbookmarks 0.1
unity-scope-colourlovers 0.1
unity-scope-devhelp 0.1
unity-scope-firefoxbookmarks 0.1
unity-scope-manpages 0.1
unity-scope-openclipart 0.1
unity-scope-texdoc 0.1
unity-scope-tomboy 0.1
unity-scope-virtualbox 0.1
unity-scope-yelp 0.1
unity-scope-zotero 0.1
urllib3 1.22
urwid 2.0.1
wadllib 1.3.2
webencodings 0.5
Werkzeug 1.0.1
wheel 0.30.0
xkit 0.0.0
youtube-dl 2018.3.14
zipp 3.4.1
zope.interface 4.3.2
@cyberFoxi the opencv message seems pretty informative:
OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.
@glenn-jocher Yes, I have the same problem, it's very informative to me too. Unfortunately, I don't have any more information on what might solve this problem.
The system is running, but very, very slowly and this warning keeps popping up while it is running.
Whether I use pictures or the webcam, the OpenBLAS warning always comes.
Is it resolved?
I have the same problem. I installed and ran YOLOv5 two days ago and this problem appeared :/
Could anyone solve it ?
@raghavddps2 @deema1999
Unfortunately not.
Still have the problem.
I have the same problem and no idea how to deal with it.
Same issue on RPi 4 64-bit.
Linux raspberrypi 5.10.17-v8+ #1414 SMP PREEMPT Fri Apr 30 13:23:25 BST 2021 aarch64 GNU/Linux
installed with python3 -m pip install -r requirements.txt
using yolov5s weights
@glenn-jocher rebuilding OpenBLAS didn't help...
@miwojc @fhrgithub @cyberFoxi @deema1999 @raghavddps2
I had the same issue. Building OpenBLAS from source with USE_OPENMP=1
didn't help for me.
I have a few Jetson Nanos and I think I found a few ways to fix it:
jtop (https://github.com/rbonghi/jetson_stats) output: JetPack 4.4.1 / JetPack 4.5.1
Nvidia updated VPI (https://docs.nvidia.com/vpi/index.html) and I suspect it causes the problem.
sudo apt-get install libopenblas-base libopenmpi-dev
wget https://nvidia.box.com/shared/static/wa34qwrwtk9njtyarwt5nvo6imenfy26.whl -O torch-1.7.0-cp36-cp36m-linux_aarch64.whl
./venv/bin/pip3 install torch-1.7.0-cp36-cp36m-linux_aarch64.whl
related issues: https://github.com/open-ce/pytorch-feedstock/issues/34 https://github.com/pytorch/pytorch/issues/52047
@lgg @miwojc @fhrgithub @cyberFoxi @deema1999 @raghavddps2 you may be interested in our YOLOv5 🚀 EXPORT Competition with up to $10,000 in prizes. Jetson Nano is one of the categories! See https://github.com/ultralytics/yolov5/discussions/3213#discussioncomment-817539 for details.
Is it resolved? I have the same problem and I'm confused by this terrible issue.
@raghavddps2 @lgg @miwojc @fhrgithub @cyberFoxi @deema1999
@EminGuney did you try the steps that I suggested above?
@lgg @glenn-jocher https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading03 can you help me resolve it? I created a virtual env.
@Ammad53 your code is out of date. To update:
- Git: run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
- PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
- Docker: sudo docker pull ultralytics/yolov5:latest to update your image

@lgg my details:
NVIDIA Jetson Nano (Developer Kit Version)
L4T 32.5.1 [ JetPack 4.5.1 ]
Ubuntu 18.04.5 LTS
Kernel Version: 4.9.201-tegra
CUDA 10.2.89, CUDA Architecture: 5.3
OpenCV version: 4.1.1, OpenCV Cuda: NO
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
Vision Works: 1.6.0.501
VPI: ii libnvvpi1 1.0.15 arm64 NVIDIA Vision Programming Interface library
Vulcan: 1.2.70
Same issues still on JetPack 4.4.1:
python jetsonInfo.py
NVIDIA Jetson Nano (Developer Kit Version)
L4T 32.4.4 [ JetPack 4.4.1 ]
Ubuntu 18.04.5 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.2.89, CUDA Architecture: 5.3
OpenCV version: 4.1.1, OpenCV Cuda: NO
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
Vision Works: 1.6.0.501
VPI: 0.4.4
Vulcan: 1.2.70
Hi @lgg , thanks for sharing the results on how you fixed this. I am trying to run detect.py on a Jetson Xavier NX on jetpack 4.5.1 and running into this issue as well. I tried your solution above to install this pytorch wheel - torch-1.7.0-cp36-cp36m-linux_aarch64.whl , but received an error:
"ERROR: torch-1.7.0-cp36-cp36m-linux_aarch64.whl is not a supported wheel on this platform."
I'm a bit confused how this ran on your Nano while I am receiving the error on a Xavier NX. Do you have any ideas? Thanks! Rob
@rob-pointcloud I have the same platform (Xavier NX), but the problem continues and no solution is available.
@Ammad53 your code is out of date. To update:
- Git: run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
- PyTorch Hub: force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
- Notebooks: view updated notebooks
- Docker: sudo docker pull ultralytics/yolov5:latest to update your image
That doesn't answer the question from @rob-pointcloud, @lgg, @Ammad53, @miwojc.
@lgg I've installed JetPack 4.4.1 but the logs show the same info (OpenBLAS). However, when I checked the GPU info, it indicated that VPI is just 0.4.4, without the Nvidia Vision Programming Interface.
I also tried to downgrade PyTorch, but it caused extremely slow program execution.
Any suggestions? Thanks.
Hi all, I was able to resolve the errors I received installing the PyTorch wheel in my earlier post. I think I was trying to install with Python 3.8; when I installed just with a standard pip3 install (after installing a couple of build dependencies) it worked.
However, that installs torch into the default python 3.6, which I don't think helps, since yolov5 needs to run in python3.8. I need to work on getting torch 1.7 installed for python3.8, if there is an obvious way I am missing, someone can let me know. I tried installing torch from source but was getting some errors, I may need to revisit.
@lionelsnaw , you mentioned that you downgraded PyTorch, but it caused extremely slow execution. What version of PyTorch are you using?
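A side note on the "not a supported wheel" error mentioned above: pip rejects a wheel when the tags in its filename don't match the running interpreter, and the cp36 tag on this wheel means it only installs into CPython 3.6. A minimal stdlib sketch of reading those tags (the filename is taken from the posts above; this split only works because the name and version here contain no hyphens):

```python
import sys

# Wheel filename format: {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
wheel = "torch-1.7.0-cp36-cp36m-linux_aarch64.whl"
name, version, py_tag, abi_tag, platform_tag = wheel[:-4].split("-")

print(py_tag)        # cp36: built for CPython 3.6 only
print(platform_tag)  # linux_aarch64: 64-bit ARM Linux only

# pip reports "not a supported wheel" when this doesn't line up:
this_py = f"cp{sys.version_info.major}{sys.version_info.minor}"
print("interpreter matches wheel:", this_py == py_tag)
```

This is why the same wheel installed fine on a stock JetPack image (Python 3.6) but failed under a Python 3.8 environment.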
@rob-pointcloud Hi, thanks for the suggestion. Because I ran YOLOv5 in the default Python version (3.6.9), I will try that one too.
Concerning the PyTorch version: I tried the latest one (1.8.1), but it executed on the CPU. Later I tried 1.8.0, installed from the Nvidia docs; in that case execution was stuck at 'Fusing layers...'. The same thing happened when I downgraded to 1.7.0 (both executed on the GPU).
@rob-pointcloud @lionelsnaw good news 😃! Your original issue may now be partially addressed ✅ in PR #3548. This PR reduces the YOLOv5 Python version requirement to >= 3.6.2. To receive this update:
- Git: run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
- PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
- Docker: sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
Thanks @glenn-jocher for the update! I think running this in Python3.6 is going to make it easier to get all the versions to line up, especially for me, since I am not an experienced python programmer and the dependency alignment is making my head spin =]
@lionelsnaw , I think I am now at the same point as you. I can get detect.py to run without error using torch 1.9 and torchvision 0.10, but it runs with the CPU, not GPU. I'm going to see if I can figure out how to get one of these later versions of torch/torchvision to compile with GPU, to see if the OpenBLAS fusing error is tied to the GPU mode execution.
OK, I think I have managed to get this working!
1) Start with a clean JetPack 4.5.1 install (I am using a Jetson Xavier NX)
2) Pull the Nvidia PyTorch docker image: https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch (l4t-pytorch:r32.5.0-pth1.6-py3)
3) Update the yolov5 requirements.txt to comment out torch and torchvision, since they are installed with the docker build with CUDA
4) pip install -r requirements.txt
5) Run detect.py; it runs WITH CUDA
root@persius0:/yolov5# python3 detect.py
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, half=False, hide_conf=False, hide_labels=False, imgsz=640, iou_thres=0.45, line_thickness=3, max_det=1000, name='exp', nosave=False, project='runs/detect', save_conf=False, save_crop=False, save_txt=False, source='data/images', update=False, view_img=False, weights='yolov5s.pt')
YOLOv5 🚀 v5.0-203-g2754ada torch 1.7.0 CUDA:0 (Xavier, 7773.5546875MB)
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
image 1/2 /yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.124s)
image 2/2 /yolov5/data/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.089s)
Results saved to runs/detect/exp2
Done. (0.512s)
root@persius0:/yolov5#
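The step of commenting out torch and torchvision in requirements.txt can be sketched like this; the four-line requirements.txt below is a toy stand-in I made up, since the real file has more entries:

```shell
# Toy stand-in for yolov5's requirements.txt (the real file has more entries)
cat > requirements.txt <<'EOF'
numpy>=1.18.5
opencv-python>=4.1.2
torch>=1.7.0
torchvision>=0.8.1
EOF

# Comment out torch and torchvision so pip does not replace the
# CUDA-enabled builds that ship with the l4t-pytorch docker image.
sed -i -E 's/^(torch|torchvision)/# \1/' requirements.txt
cat requirements.txt
```

After the edit, pip install -r requirements.txt leaves the container's CUDA builds of torch and torchvision in place.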
@rob-pointcloud @lionelsnaw if you think you have a good deployment pipeline to Jetson or Xavier you might want to consider a submission to the YOLOv5 EXPORT Competition:
We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with $10,000 in cash prizes!
@miwojc I have the same issue on RPi 4 64-bit OS (Raspbian, same kernel version). I am using Python 3.7.3 (torch==1.8.0; torchvision==0.9.1; you really helped me in another thread, thank you!). Did you solve the problem with OpenBLAS, please? I am really out of ideas.
I didn't solve the OpenBLAS warning, but it works even with the warning. Did you notice there is something wrong with the results when it shows the warning?
@miwojc Yes, it works. I was only disappointed about the speed, because it takes a long time (about 3 seconds) to compute every frame of the stream, and the terminal gets spammed really quickly.
Yeah, agree on spamming the terminal. Regarding speed, it seems acceptable to me; not sure what the speed would be without the warning. But with a small image size like 256px it's pretty fast, around 0.3-0.4 s per image with yolov5s (small).
I set the stream to 640x480px from my camera (that's good for my purpose), I can also set it to 352x288px, but it's too small for me. Ideally I want 1280x720 with yolov5s6, but that's probably unrealistic with RPi. I'm considering buying a Nvidia Jetson Nano, do you think it will be better with it?
You could check the qengineering repo. They have a lot of RPI, Jetson Nano stuff. https://github.com/Qengineering/YoloV4-ncnn-Raspberry-Pi-4
Thank you a lot!!
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Access additional YOLOv5 🚀 resources:
Access additional Ultralytics ⚡ resources:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
This post helped me solve this warning.
Please use:
export OMP_NUM_THREADS=1
This simple line could boost your detection!
Then run your detect.py
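If you set the variable from inside a Python script rather than the shell, it has to happen before the BLAS-backed libraries are imported; here is a minimal sketch (the extra OPENBLAS_NUM_THREADS line is my own addition, not part of the tip above):

```python
import os

# Must be set before numpy/torch are imported: once OpenBLAS has
# initialized its thread pool, changing these has no effect.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np  # OpenBLAS now starts single-threaded

a = np.random.rand(64, 64)
b = a @ a  # matmul runs without spawning the clashing OpenMP threads
print(b.shape)  # -> (64, 64)
```

Setting it in the shell with export before launching detect.py achieves the same thing, since child processes inherit the environment.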
Thanks @kuonumber, that helps. But the recognition is not faster; I only get an output every 3 to 4 seconds. Only the warning that was spammed is no longer shown. In the past it was faster, about 0.1 seconds.
Do you have any other idea?
This post helped me solve this warning. Please use:
export OMP_NUM_THREADS=1
This simple line could boost your detection! Then run your detect.py
@cyberFoxi 3 to 4 seconds is what it takes on a Raspberry Pi, so it means you are running it on the CPU, not the GPU.
On the NVIDIA Jetson Nano there are two ways to solve this problem along with running YOLOv5 on the GPU:
After the install you can just download and run the YOLOv5 repository. With torch 1.9.0 and torchvision 0.10.0 I am able to process one frame in 0.16 s. Please give me feedback if you install an older version and it goes faster than roughly 0.16 s.
@kuonumber Really? Are you running Raspberry Pi or NVIDIA Jetson Nano? If Jetson Nano, are you using GPU or CPU for Yolov5 predictions?
@frycaktadeas @cyberFoxi I use yolov5s on a Pi 4B 8G (OS: Ubuntu 21.04) and get about a 20% speedup (from 2.2 s to 1.7 s) for one image. In my opinion, that is huge progress for this little machine =).
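For reference, the reported timings can be checked quickly; this is just the arithmetic behind the "about 20%" figure:

```python
before, after = 2.2, 1.7  # seconds per image, from the report above

speedup = (before - after) / before  # fraction of time saved
print(f"{speedup:.1%}")  # -> 22.7%, i.e. "about 20%"
```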
This post helped me solve this warning. Please use:
export OMP_NUM_THREADS=1
This simple line could boost your detection! Then run your detect.py

This worked for me on Raspberry Pi.
Same issue on RPi 4 64-bit. Could anyone solve it?
This turns off only the openblas threading:
export OPENBLAS_NUM_THREADS=1
Found it here: https://github.com/xianyi/OpenBLAS/blob/develop/USAGE.md
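As a rough sketch of how this is used in practice (the detect.py invocation is left as a comment, since it depends on your checkout):

```shell
# Limit only OpenBLAS's own thread pool; OpenMP settings stay untouched.
export OPENBLAS_NUM_THREADS=1

# The setting is inherited by any process started afterwards, e.g.:
#   python3 detect.py --source 0
echo "OPENBLAS_NUM_THREADS=$OPENBLAS_NUM_THREADS"
```

Exported variables only affect processes started after the export, so set it in the same shell session (or in ~/.bashrc) before running detection.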
@sujona hi there! 👋
Thanks for sharing your findings and potential solutions for resolving the OpenBLAS warning. We greatly appreciate your efforts. The YOLOv5 community benefits tremendously from collaborative problem-solving like this. If you have any other questions or face further issues, please feel free to let us know. We're here to assist you!
🐛 Bug
I have a new problem with yolov5 running on the Nvidia Jetson nano.
So if I use "python3 detect.py --source 0" with the webcam, I get the message "OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option". See Output.
The FPS of the output window are also very low.
I didn't have the problem about 6 weeks ago when I had a previous version of yolov5.
Is that a bug? Does anyone have an idea why this could be?
To Reproduce (REQUIRED)
Input:
Output:
Expected behavior
Environment
Nvidia Jetson Nano B01
Additional context