allo- / virtual_webcam_background

Use a virtual webcam background and overlays with body-pix and v4l2loopback
GNU General Public License v3.0
306 stars · 48 forks

Documentation on using virtual webcam in Docker #51

Open ghost opened 3 years ago

ghost commented 3 years ago

Allows use of precompiled TF+CUDA+cuDNN builds that TF hasn't published, e.g. https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow. Some ways to do this are:

  1. use --privileged for the Docker container to expose /dev/video*
  2. use --device for the Docker container

PS: volunteering to write the doc, but would like to make sure it's something there's interest in. https://github.com/allo-/virtual_webcam_background/issues/34 could probably be roped in here.
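The two approaches above could look roughly like this (a sketch; the NGC image tag and /dev/video2 device number are assumptions and will differ per setup):

```shell
# Option 1: privileged mode exposes all of /dev, including the
# v4l2loopback device, to the container (broad but simple).
docker run --rm -it --privileged \
    nvcr.io/nvidia/tensorflow:20.12-tf1-py3 bash

# Option 2: pass through only the loopback device (narrower scope).
# /dev/video2 is an assumed device; check with `v4l2-ctl --list-devices`.
docker run --rm -it --gpus all \
    --device /dev/video2 \
    nvcr.io/nvidia/tensorflow:20.12-tf1-py3 bash
```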
allo- commented 3 years ago

Is there an advantage to using Docker? For the Python stuff you can (and probably should) use a virtual environment, and you can add a user account if you want stronger isolation; the CUDA driver must be installed on the host anyway. Or am I missing something? I currently only see a lot of overhead, and not using Docker was one of the reasons I created the project instead of using the Docker-based setup from the blog article linked in the readme.

ghost commented 3 years ago

I haven't observed any measurable overhead with Docker. Also, using an NVIDIA Docker container means you don't need to install CUDA, cuDNN, TensorFlow and the rest on your machine, just the NVIDIA GPU driver.

Essentially, with NGC containers you get a cherry-picked TF compiled with all optimizations and the latest CUDA+cuDNN, without having to deal with the faff of installing, compiling and linking it yourself. I have this running on CUDA 11.1 + cuDNN 8, while current TF only works well with CUDA 10 + cuDNN 7 unless you compile it yourself.
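A quick way to verify that claim (a sketch, assuming the NVIDIA Container Toolkit is installed; the image tag is an assumption):

```shell
# The host needs only the NVIDIA driver and nvidia-container-toolkit;
# CUDA, cuDNN and TF all come from the NGC image. If this prints the
# GPU table, the container can see the GPU.
docker run --rm --gpus all \
    nvcr.io/nvidia/tensorflow:20.12-tf1-py3 \
    nvidia-smi
```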

allo- commented 3 years ago

If you'd like to write documentation, I see no reason not to include it. I just don't currently want to test it myself, but once you've tested your howto and it works for you, it can be included.

I used the Debian packages and installed the other libraries to /usr/local/lib/nvidia, and it works for me both with the packages and when using LD_LIBRARY_PATH=/usr/local/lib/nvidia.
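The LD_LIBRARY_PATH variant amounts to a one-liner (a sketch; the script name virtual_webcam.py is taken from the repo, the library path from the comment above):

```shell
# Point the dynamic linker at the locally installed CUDA/cuDNN libs
# for this process only, then start the webcam script.
LD_LIBRARY_PATH=/usr/local/lib/nvidia python virtual_webcam.py
```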

I don't think Docker has real CPU overhead. Containers have some RAM overhead (not relevant here, as other things require more RAM), but I was thinking about the complexity: you use containers and fairly complex management tools just to run something that works with a virtualenv as well.
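For comparison, the virtualenv route mentioned here is short (a sketch; the requirements file and script name follow the repo's layout):

```shell
# Isolate the Python dependencies without containers.
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python virtual_webcam.py
```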

ghost commented 3 years ago

Awesome, will write it soon. The complexity doesn't really get in the way once you've mounted your workspace. There are good flags for passing through the host network and such, so I've found that it works well. The other big draw is the isolation from the rest of the system: e.g., I can restrict the container to a certain GPU and a set of resources, so the rest of the system doesn't have to contend with Docker, without having to set up niceness values and all that :)
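The GPU/resource restriction described here maps onto standard docker run flags (a sketch; the image tag and the CPU/memory limits are example values):

```shell
# Pin the container to GPU 0 and cap CPU/RAM so the rest of the
# system is unaffected; --network host passes through host networking.
docker run --rm -it \
    --gpus '"device=0"' \
    --cpus 4 --memory 8g \
    --network host \
    nvcr.io/nvidia/tensorflow:20.12-tf1-py3 bash
```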

Arjdroid commented 3 years ago

Hey @dsingal0, have you made any progress with that guide yet? I am also looking to run this software in an nvidia-docker instance and was wondering whether you had any advice on how to get started, because, as you mentioned, the documentation on GPU-accelerated virtual-webcam-background setups is very limited as of now.

Thanks in advance.

ghost commented 3 years ago

@Arjdroid I've stopped using the project since I moved to Windows on my personal machines and am using NVIDIA Broadcast now. IIRC, though, using this within Docker wasn't too hard: you just needed to either launch it in privileged mode or pass the /dev/* devices into the container runtime. Stackoverflow

Arjdroid commented 3 years ago

> @Arjdroid I've stopped using the project since I moved to windows on my personal machines and am using NVIDIA Broadcast now. IIRC though, using this within Docker wasn't too hard, you just needed to either launch it in privileged mode, or pass in the /dev/* devices into the container runtime.

Thank you for your response! I will check it out

ghost commented 3 years ago

@allo- I'm outlining the edits I had to make to get this working with nvidia-docker. Unfortunately I don't think I'll be able to commit time to a PR, since some of the steps rely on replacing/adding lines in your own config file in a way that may break running outside Docker. @Arjdroid, in order to get this running with nvidia-docker2 you'll want to use the NGC TensorFlow container. Here's a link to a directory with 2 scripts to get it running on Ubuntu (18.04-21.04): https://github.com/dsingal0/random/tree/main/vbackground Expanded description:

  1. host.sh: run as sudo ./host.sh. It installs the v4l2 utils for Ubuntu, creates a virtual camera at /dev/video2, and launches the TF container. You will most probably have to edit the -v mount to make sure you're passing in the directory as it exists on your machine. The --device might also change if you're using a different virtual camera loopback device.
  2. container.sh: run this script once you're in the container. It installs the mesa dependency, clones this project, and replaces the requirements, config file, and main Python file. The requirements file is replaced so it doesn't try to reinstall TF and NumPy. The config file is replaced with what I was using for resnet + /dev/video2. The main Python file is replaced with one containing the line os.environ['CUDA_VISIBLE_DEVICES'] = "0" so TF can see the GPU within the container; otherwise it throws a failed call to cuInit: CUDA_ERROR_NO_DEVICE error.
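The CUDA_VISIBLE_DEVICES change in step 2 boils down to setting the variable before TensorFlow initializes CUDA (a sketch; the TF import is shown commented because the point is only the ordering):

```python
import os

# Must be set before TensorFlow initializes CUDA; otherwise the GPU is
# invisible inside the container and TF fails with
# "failed call to cuInit: CUDA_ERROR_NO_DEVICE".
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# ...only then import and use TensorFlow as usual:
# import tensorflow as tf
```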
allo- commented 3 years ago

@dsingal0 Can you explain the problem with the config? The config is one of the few things you should be able to change without touching the source. So can't you just create a docker-run.sh script that creates a config file and then starts the program?
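Such a docker-run.sh could look roughly like this (a sketch; the config keys and device paths are assumptions, see config.yaml.example in the repo for the real ones):

```shell
# docker-run.sh -- generate a Docker-specific config at container
# start instead of patching the repo's files.
cat > config.yaml <<'EOF'
real_video_device: /dev/video0
virtual_video_device: /dev/video2
EOF
python virtual_webcam.py
```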

ghost commented 3 years ago

@allo- sorry, I didn't mean that the config was a problem. It's just that the link I provided points to my own repo with my own config.yaml, which differs from the config.yaml.example in your repo, so I wanted to let other readers know they'd have to adapt it to what they're trying to do.

allo- commented 3 years ago

Ok. Just ask when changes are needed, so we can discuss how to implement them in a way that works both in Docker and without it.