AntouanK opened 1 year ago
after installing/uninstalling packages I got it to run somehow, with (I think) GPU on tensorflow. (I'm clueless about Python and its packaging system.)
I now get the error that the troubleshooting section refers to.
I tried the command it gives (with and without sudo), but it never creates a new device. The /dev/video2 device I already have; I got it when I tried this repo.
The suggested command was
$ sudo modprobe v4l2loopback devices=1 exclusive_caps=1 video_nr=2 card_label="fake-cam"
and it seems to persist even after I log out and log back in.
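(As an aside: a manually loaded module survives logouts, but not reboots. To have it loaded at boot, config files along these lines should work on systemd-based distros; the file paths follow the common convention and the options are copied from the command above, so adjust as needed:)

```
# /etc/modules-load.d/v4l2loopback.conf
v4l2loopback

# /etc/modprobe.d/v4l2loopback.conf
options v4l2loopback devices=1 exclusive_caps=1 video_nr=2 card_label="fake-cam"
```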
Your user account needs read/write access to the devices. Depending on your distribution they should be owned by the group video, and you can add your user account to that group. The less secure option is to use chmod 666 and allow everyone to read and write the devices.
Also make sure to use the right input and output devices. Your cam seems to register two video devices (0, 1) and probably only one of them is usable. Check with a video player whether it can be used.
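To check the permission part quickly, a small sketch like this tells you whether your account can open a device (the device paths are just the ones from this thread; adjust to your setup):

```python
import os

def can_access(device):
    """Return True if the current user can read and write the device node."""
    return os.access(device, os.R_OK | os.W_OK)

# Example device paths from this thread; adjust to your setup.
for dev in ("/dev/video0", "/dev/video2"):
    status = "ok" if can_access(dev) else "no read/write access (check the video group)"
    print(dev, "->", status)
```

If a device shows "no read/write access", adding yourself to the video group (and logging in again) is usually the fix.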
you're right, adding myself to the video group solved it.
my normal webcam is /dev/video0; I checked with vlc.
I get this error now.
❯ python ./virtual_webcam.py
Num GPUs Available: 1
Traceback (most recent call last):
File "/run/media/antouank/evo1/_REPOS_/virtual_webcam_background/./virtual_webcam.py", line 17, in <module>
import tfjs_graph_converter.api as tfjs_api
ModuleNotFoundError: No module named 'tfjs_graph_converter'
:)
if I do
pip install tfjs-graph-converter
then I end up back at the previous error again.
❯ python ./virtual_webcam.py
Num GPUs Available: 1
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Reloading config.
Traceback (most recent call last):
File "/run/media/antouank/evo1/_REPOS_/virtual_webcam_background/./virtual_webcam.py", line 123, in <module>
fakewebcam = FakeWebcam(config.get("virtual_video_device"), width, height)
File "/home/antouank/.local/lib/python3.10/site-packages/pyfakewebcam/pyfakewebcam.py", line 54, in __init__
fcntl.ioctl(self._video_device, _v4l2.VIDIOC_S_FMT, self._settings)
OSError: [Errno 22] Invalid argument
And I am in the video group this time.
also, why does it keep switching to the CPU?
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
how can I make it use the GPU? ( if it eventually runs )
I already have the cuda packages installed.
What else would I need?
Today, the script seems to work fine. I made the video10 device and it loads up right away. I guess the video2 device I had was problematic and I couldn't create a new one for some reason.
The GPU is still an issue though.
I get this when the script starts :
❯ python ./virtual_webcam.py
2022-12-13 08:48:30.190135: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-13 08:48:31.118689: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-13 08:48:31.118758: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-13 08:48:31.118769: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Reloading config.
Model: mobilenet (multiplier=0.5, stride=16)
Loading model...
done.
It's very very slow on CPU. Like 4-5 fps. I have an nvidia 4090 so I'd like to make use of it. What can I do to make the script see the GPU?
thank you.
PS I tried to read the TF documentation and the test command shows that the library is seeing my GPU.
❯ python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2022-12-13 09:54:13.643644: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-13 09:54:14.510984: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/antouank/.conda/envs/virtual-webcam/lib/
2022-12-13 09:54:14.511351: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/antouank/.conda/envs/virtual-webcam/lib/
2022-12-13 09:54:14.511366: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
But then why is the script switching to the CPU?
Have a look at the tensorflow tutorials for your platform. It isn't always easy to get the right versions.
You need tensorflow-gpu and it has to match the installed CUDA version. Different Python versions only support certain tensorflow versions, so one may have to try some combinations until it works.
Your errors look like you have a GPU-enabled tensorflow, but not the nvidia libraries for neural networks.
https://www.tensorflow.org/install/pip#software_requirements
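To see at a glance which of these packages are actually installed in the active environment, a quick sketch like this can help (the package names listed are just the usual suspects, not a definitive list):

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version of a package, or None if it is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

for pkg in ("tensorflow", "tensorflow-gpu", "tensorflowjs", "numpy"):
    print(pkg, "->", installed_version(pkg) or "not installed")
```

Comparing that output against the version table on the tensorflow install page is usually the fastest way to spot a mismatch.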
@allo- thanks for the response.
I've read this page 3-4 times by now. Unfortunately, the steps it gives are not helping with the cuDNN issue. Any idea how I can debug it? Maybe how to see which versions I have installed, or what I could try installing? I've googled a lot but can't find a specific example. :/
You should be able to get much of the stuff from your linux distribution, but I think for cuDNN and some others you need a download from nvidia.com that requires an account (you can use a throwaway address for it; the account is only needed for downloading).
I already have cudnn installed.
And I tried cudnn8-cuda11.0 (#9), but it fails to build.
What I'm trying now is this command
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
It literally has been going on for almost 3 hours.
I don't know what it's doing :joy:
PS done after 3+ hours. and of course, the script has the same output :cry:
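(If a plain conda install keeps solving for hours, the same pins can go into an environment file so conda resolves everything in one pass. This is just a sketch; the file and environment names are hypothetical, and the versions are copied from the command above:)

```yaml
# environment.yml (hypothetical)
name: virtual-webcam
channels:
  - conda-forge
dependencies:
  - cudatoolkit=11.2
  - cudnn=8.1.0
```

Create it with `conda env create -f environment.yml`, then activate the environment before installing the pip packages.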
I know it's a mess to get the right versions, and I don't have good advice for your system either. Look at basic tutorials and FAQs and try to get the pieces together, or maybe ask in some help forums for cuda/tensorflow or general deep learning topics.
When you find a definitive guide I'm happy to link to it, but beyond what's on the tensorflow homepage I don't know what to recommend myself.
Depending on your system it could install wheel packages, but when it builds from source, probably something on your system isn't supported.
The versions in my current virtual environment are:
Python 3.8.2
tensorflow==2.4.4
tensorflow-estimator==2.4.0
tensorflow-hub==0.9.0
tensorflowjs==3.3.0
how do you normally set it up? let's say you just cloned the repo on a linux machine. do you use conda, pip, or something else? maybe I can wipe all the packages I have, clone again, and follow the same steps you did.
I use a virtual environment and installed the packages with pip install -r requirements.txt. The dependencies should be installed automatically. Depending on the tensorflow version you may need a different numpy version.
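Spelled out, that setup might look like this from a fresh clone (a sketch; the venv directory name is arbitrary, and the guard just keeps the snippet from failing outside the repo):

```shell
# create and activate an isolated environment
python3 -m venv .venv
. .venv/bin/activate

# install the pinned dependencies that ship with the repo
if [ -f requirements.txt ]; then
    pip install -r requirements.txt
fi
```

After that, `python ./virtual_webcam.py` runs inside the environment without touching system packages.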
@allo- After a long rabbit hole I managed to get a mediapipe/bazel/selfie_segmentation build that runs locally, and I can see myself at 60fps with the background replaced using the GPU. I got some help from here.
The issue now is that I have no clue how to use that binary/graph to redirect the output to a fake webcam video device (or how to configure the input/output in general).
Any idea? I've been googling all morning but I cannot find any example showing how to connect it with what I have built.
When you use the virtual webcam background program with mediapipe, you configure the video devices the same way as with resnet/mobilenet; you just cannot use many of the plugins, but segmentation should work as well as with the other mediapipe code.
The standard is v4l2loopback for the video device, but akvcam would be a more modern solution, only the configuration is more involved. See #33 for discussion and my example config.
I get this error when trying to run
python ./virtual_webcam.py
more specifically