Virtual Webcam Background

Virtual Webcam Background allows you to use a virtual background or to blur the background of your webcam image, similar to commercial programs like Zoom.

TensorFlow with BodyPix is used to segment the image into foreground (person) and background with a neural network, and v4l2loopback is used to create a virtual webcam.

As the script creates a virtual webcam device, it works with any program that can use a v4l2 webcam.

See virtual-webcam.com for more information, image packs and more.

Installation

The program needs Python 3.5–3.8 and is tested with Python 3.7.

Install the requirements:

pip install -r requirements.txt

Download the bodypix model:

./get-model.sh

Then install v4l2loopback and load the kernel module:

modprobe v4l2loopback exclusive_caps=1 video_nr=2 # creates /dev/video2

The exclusive_caps option is needed by some programs, such as Chromium.

Then copy config.yaml.example to config.yaml, edit the config as needed, and run the virtual_webcam.py script.
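For example (both filenames come from the steps above; adjust the config before starting the script):

cp config.yaml.example config.yaml
# edit config.yaml, then start the virtual webcam:
./virtual_webcam.py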

If you have an Nvidia graphics card, you may want to install CUDA for better performance.

Python version

Due to its dependencies, the script does not work with tf-nightly. This means you need a Python version that is supported by the stable tensorflow packages. If your distribution only ships python3.9, for example, you need to install python3.8 yourself. You can have a look at this HowTo for installing Python 3.8 on Debian 10.
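One way to use a supported interpreter, sketched here under the assumption that python3.8 is already installed, is a virtual environment (the my_venv name is only an example; it matches the path in the troubleshooting section below):

python3.8 -m venv my_venv
source my_venv/bin/activate
pip install -r requirements.txt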

Troubleshooting

Problems opening the video device

Problems like this

  File "./virtual_webcam.py", line 101, in <module>
    fakewebcam = FakeWebcam(config.get("virtual_video_device"), width, height)
  File "my_venv/lib/python3.8/site-packages/pyfakewebcam/pyfakewebcam.py", line 54, in __init__
    fcntl.ioctl(self._video_device, _v4l2.VIDIOC_S_FMT, self._settings)
OSError: [Errno 22] Invalid argument

mean in most cases that you're using the wrong video device. Try to play the stream from the real device using a media player. You can make sure you recognize the loopback device by choosing a high number when loading the kernel module:

modprobe v4l2loopback video_nr=10 exclusive_caps=1 # Creates /dev/video10

Then your webcam will most likely have the device /dev/video0 and the fake cam the device /dev/video10.
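To check which device is the real webcam, you can, for example, play it with ffplay from the ffmpeg package (any media player that can open v4l2 devices works):

ffplay -f v4l2 /dev/video0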

Problems using the fake camera in Chromium-based browsers

Make sure to load the v4l2loopback kernel module with the exclusive_caps=1 parameter.
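If the module was already loaded without the parameter, one way to fix it is to reload it with the parameter set (this removes and recreates the loopback device, so stop any program that is using it first; the device number is just an example):

modprobe -r v4l2loopback
modprobe v4l2loopback exclusive_caps=1 video_nr=2 # creates /dev/video2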

Animated GIF support

Animated GIFs do NOT work as a background with the image filter, but they do work with the video filter.
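So to use an animated GIF as a background, pass it to the video filter instead; a minimal sketch with a placeholder filename:

- "empty": [["video", "animation.gif"]]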

Configuration

To configure the virtual webcam, edit config.yaml. Most options are applied instantly. The exceptions are width and height, as the webcam must be reinitialized to change them, and multiplier and stride, as the model must be reloaded to apply them.

Note: Input width and height are autodetected when they are not set in the config, but this can lead to bad default values, e.g., 640x480 even when the camera supports a resolution of 1280x720.
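A sketch of the options mentioned in this README, written in the same notation as the examples below (the values are only illustrative; see config.yaml.example for the full list of options and the exact file format):

- width: 1280
- height: 720
- virtual_video_device: /dev/video2
- model: mobilenet
- multiplier: 0.5
- stride: 16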

Layers

The layers option contains a list of layers, each consisting of one image source and a list of filters. The image sources used in the examples below are input, foreground, previous and empty.

Each layer has a list of filters that are applied in the given order. After all filters are applied, the layer is merged with the previous layers.

Filters

Each layer has a list of filters. A simple example that converts the background to grayscale and blurs it looks like this:

  - input: ["grayscale", "blur"]

Some filters have arguments. One way to change the blur value in the filter list above is shown in the sketch below.
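Here the filter is given as a list of its name and arguments, in the same notation as the example layers that follow (the value 10 is only illustrative):

  - input: [["blur", 10]]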

Example Layers

A virtual background

- layers:
  - empty: [["image", "background.jpg"]]
  - foreground: []

Blurred background

- layers:
  - input: [["blur", 10]]
  - foreground: []

Blurred background and a moving fog overlay

- layers:
  - input: [["blur", 10]]
  - foreground: []
  - previous: [["image", "images/fog.jpg"], ["roll", 5, 0]]

Filters

The current filters and their options are:

Videos

If you have a video, you can use the video filter:

- "empty": [["video", "my-video.mp4"]]

Image Sequences

Another option is image sequences, which allow you, for example, to use transparent PNGs.

Example

Example config for loading an image sequence from the folder "frames" and playing it with 5 frames per second:

- empty: [["image_sequence", "frames", 5]]

The program tries to load frames/*.*, so you need to make sure that the folder only contains images and that the images are in the correct order when sorted by filename.

Example for creating an image sequence from a short video and adding alpha transparency for a green screen effect using ffmpeg and ImageMagick:

mkdir animation
cd animation
ffmpeg -i ../animation.webm -vf fps=10 out%04d.png
mogrify -fuzz 10% -transparent 'rgb(0,129,27)' *

When using the ffmpeg command, you can change the output framerate using the fps parameter.
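The generated frames can then be used like in the image sequence example above, e.g. with the 10 frames per second from the ffmpeg command (the folder name animation matches the commands above):

- empty: [["image_sequence", "animation", 10]]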

Note that the script loads all images of an animation into RAM, scaled to the resolution of your webcam, so overly long animations are not a good idea.

Advanced

To download other models, get the full get-model.sh script from https://github.com/ajaichemmanam/simple_bodypix_python and run it with one of these combinations:

./get-model.sh bodypix/mobilenet/float/{025,050,075,100}/model-stride{8,16}
./get-model.sh bodypix/resnet50/float/model-stride{16,32}
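For example, to download the mobilenet model that should correspond to the multiplier 0.5 and stride 16 used in the config below:

./get-model.sh bodypix/mobilenet/float/050/model-stride16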

Example config for mobilenet:

- model: mobilenet
- multiplier: 0.5
- stride: 16

Example config for resnet50:

- model: resnet50
- stride: 16

License

GNU General Public License v3.0

Acknowledgements