remz1337 opened 11 months ago
Hello :wave:
Thank you for taking the time to open this issue with recognize. I know it's frustrating when software causes problems. You have made the right choice to come here and open an issue to make sure your problem gets looked at and if possible solved. I try to answer all issues and if possible fix all bugs here, but it sometimes takes a while until I get to it. Until then, please be patient.

Note also that GitHub is a place where people meet to make software better together. Nobody here is under any obligation to help you, solve your problems or deliver on any expectations or demands you may have, but if enough people come together we can collaborate to make this software better. For everyone. Thus, if you can, you could also look at other issues to see whether you can help other people with your knowledge and experience. If you have coding experience it would also be awesome if you could step up to dive into the code and try to fix the odd bug yourself. Everyone will be thankful for extra helping hands!

One last word: If you feel, at any point, like you need to vent, this is not the place for it; you can go to the forum, to twitter or somewhere else. But this is a technical issue tracker, so please make sure to focus on the tech and keep your opinions to yourself. (Also see our Code of Conduct. Really.)
I look forward to working with you on this issue Cheers :blue_heart:
I have the same issue. I think I followed all the instructions regarding drivers and the CUDA/cuDNN requirements.
Are you using CUDA 12 or CUDA 11? I believe we currently only support CUDA 11.
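For anyone unsure which CUDA version they actually have: the "CUDA Version" field printed by nvidia-smi is only the maximum version the driver supports, while nvcc reports the toolkit that is actually installed. A small standard-library sketch (no assumptions about your setup; it returns None when nvcc is absent):

```python
import shutil
import subprocess

def cuda_toolkit_version():
    """Return nvcc's "release" line, or None if no CUDA toolkit is installed.

    Note: nvidia-smi's "CUDA Version" field is the highest version the
    driver supports, not necessarily what is installed on the system.
    """
    if shutil.which("nvcc") is None:
        return None
    out = subprocess.run(["nvcc", "--version"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "release" in line:  # e.g. "Cuda compilation tools, release 11.8, ..."
            return line.strip()
    return None

if __name__ == "__main__":
    print(cuda_toolkit_version() or "nvcc not found")
```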
Indeed CUDA 12. The app is still working though, it's just that warning message that seems to be the issue
I think it falls back to CPU if GPU can't be loaded
But I can see the recognize/bin/node
process running on my GPU using nvidia-smi
huh
Wondering if it is still the case that CUDA 12 is not supported? I have: Driver Version: 560.28.03, CUDA Version: 12.6.
I have the same warning message when enabling gpu. I get a process on the GPU of a few hundred megs when I start a scan but no gpu utilization from that process.
There is something really wrong with this integration and idk if maintainers are on it...
I had the same issue, and I have sorted it, with some caveats. Firstly, I am on Ubuntu Server 22.04, as this was the latest release when this part of the program was written, and the GPU support does not appear to have been updated since. More on that later.
My GPU has CUDA compute capability 5.2, which is not directly supported by the precompiled TensorFlow binaries, so I had to compile my own. The successful attempt took seven hours on a Xeon E3 v2.
I am on the latest NVIDIA GPU driver and CUDA toolkit. After installing the CUDA driver from NVIDIA's .run file, I had to manually link some libraries, which is detailed by the installer at the end of the CUDA driver install. The NVIDIA FS kernel part always failed, but it doesn't seem necessary. My GPU may not be compatible.
Anyway, after all of that, and after confirming that TensorFlow was working with the GPU as per the TensorFlow website, recognize still failed.
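For reference, the TensorFlow-side check can be wrapped like this (a sketch hedged so it simply returns None when TensorFlow isn't installed in the current Python environment):

```python
def list_tf_gpus():
    """Return TensorFlow's visible GPU devices, or None if TF isn't installed."""
    try:
        import tensorflow as tf  # heavy import, so it lives inside the function
    except ImportError:
        return None
    return tf.config.list_physical_devices("GPU")

if __name__ == "__main__":
    gpus = list_tf_gpus()
    if gpus is None:
        print("tensorflow is not installed in this Python environment")
    else:
        print(f"TensorFlow sees {len(gpus)} GPU(s): {gpus}")
```

An empty list here means TensorFlow itself cannot see the GPU, which points at the driver/cuDNN install rather than at recognize.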
I found test_gputensorflow.js in /nextcloud/apps/recognize/src and manually ran it from that folder: "sudo node test_gputensorflow.js".
The output indicated it was looking for libcudnn8.so. Ubuntu has moved on to libcudnn9 in the official repositories, but there is a way to install libcudnn8 manually, which I found on:
The guide is for Ubuntu 20.04, but I did some digging around the NVIDIA archive and found a libcudnn8 package for Ubuntu 22.04. Sadly, they did not have libcudnn8 in the Ubuntu 24.04 folder, so it looks like I'm stuck on Ubuntu 22.04 until recognize is updated for libcudnn9.
In any case, I installed it, the message went away, and the recognize job I had running found 10th gear and took off like a Ferrari in a tank race. nvidia-smi showed 100% utilization by the process.
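Since a couple of people above see the process holding GPU memory but doing no work, it helps to query nvidia-smi per process rather than eyeballing the table. A hedged sketch (returns an empty list when nvidia-smi isn't on PATH, e.g. no NVIDIA driver installed):

```python
import shutil
import subprocess

def gpu_compute_apps():
    """List (pid, process_name, used_memory) for processes doing GPU compute.

    Returns [] when nvidia-smi is not available on this machine.
    """
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi",
         "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout
    return [tuple(field.strip() for field in line.split(","))
            for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for pid, name, mem in gpu_compute_apps():
        print(pid, name, mem)
```

If recognize/bin/node appears here but GPU utilization stays at 0%, the work is still being done on the CPU despite the memory allocation.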
I hope this helps others out there who are trying to get this working. I'm going to sleep now.
Which version of recognize are you using?
5.0.3
Enabled Modes
Face recognition
TensorFlow mode
GPU mode
Downstream App
Memories App
Which Nextcloud version do you have installed?
27.1.5
Which Operating system do you have installed?
Ubuntu 22.04
Which database are you running Nextcloud on?
Postgres 14.10
Which Docker container are you using to run Nextcloud? (if applicable)
N/A
How much RAM does your server have?
4 GB
What processor Architecture does your CPU have?
x86_64
Describe the Bug
This is minor, but the Recognize admin panel is telling me no GPU was found, even though everything seems to be working fine (I can see the recognize/bin/node process running on my GPU using nvidia-smi). Not sure if this is normal, but although I see the process on my GPU, my CPU usage is also way up. The exact warning appears in the Node.js section of the admin panel:
Successfully loaded libtensorflow in Node.js, but couldn't load GPU. Make sure CUDA Toolkit and cuDNN are installed and accessible, or turn off GPU mode.
More info: Proxmox 7.2, Nextcloud LXC with the GPU successfully passed through (it was already set up for ffmpeg processing in the Memories app). I installed the CUDA and cuDNN libs through the recommended instructions (pip install tensorflow[and-cuda]) and Python is finding my GPU.
Expected Behavior
If everything is working fine and using my GPU, then there shouldn't be any warning about GPU not found.
To Reproduce
Not sure, probably something to do with my setup. If you can point me where to look, I can provide more logs that may help.
Debug log
No response