allo- / virtual_webcam_background

Use a virtual webcam background and overlays with body-pix and v4l2loopback
GNU General Public License v3.0

Error during installation #10

Closed fangfufu closed 4 years ago

fangfufu commented 4 years ago

I know it sounds silly, but I can't install tensorflowjs... You might want to specify which version you want. I am on Debian Buster, btw.

$ pip install tensorflowjs
Collecting tensorflowjs
Downloading https://files.pythonhosted.org/packages/9d/30/fd751b8f1c60fe95a4d19e704dec33203aaf60b501ba61c88073562cd5fc/tensorflowjs-1.7.2-py2-none-any.whl (57kB)
    100% |████████████████████████████████| 61kB 763kB/s 
Collecting numpy>=1.16.4 (from tensorflowjs)
Downloading https://files.pythonhosted.org/packages/3a/5f/47e578b3ae79e2624e205445ab77a1848acdaa2929a00eeef6b16eaaeb20/numpy-1.16.6-cp27-cp27mu-manylinux1_x86_64.whl (17.0MB)
    100% |████████████████████████████████| 17.0MB 87kB/s 
Collecting h5py>=2.8.0 (from tensorflowjs)
Downloading https://files.pythonhosted.org/packages/12/90/3216b8f6d69905a320352a9ca6802a8e39fdb1cd93133c3d4163db8d5f19/h5py-2.10.0-cp27-cp27mu-manylinux1_x86_64.whl (2.8MB)
    100% |████████████████████████████████| 2.8MB 540kB/s 
Collecting tensorflow-cpu==2.1.0 (from tensorflowjs)
Could not find a version that satisfies the requirement tensorflow-cpu==2.1.0 (from tensorflowjs) (from versions: )
No matching distribution found for tensorflow-cpu==2.1.0 (from tensorflowjs)

I also had a problem installing tfjs-to-tf.

pip install "git+https://github.com/patlevin/tfjs-to-tf.git@v0.5.0"
Collecting git+https://github.com/patlevin/tfjs-to-tf.git@v0.5.0
Cloning https://github.com/patlevin/tfjs-to-tf.git (to revision v0.5.0) to /tmp/pip-req-build-C6Hk_F
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-req-build-C6Hk_F/setup.py", line 2, in <module>
        from tfjs_graph_converter.version import VERSION
    File "tfjs_graph_converter/__init__.py", line 6, in <module>
        from tfjs_graph_converter import api
    File "tfjs_graph_converter/api.py", line 2
    SyntaxError: Non-ASCII character '\xc2' in file tfjs_graph_converter/api.py on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-C6Hk_F/
allo- commented 4 years ago

You're using Python 2, but TensorFlow 2.x and the related packages are Python 3 only. In addition, you will probably need Python 3.7 (even when Python 3.8 is available).

Try creating a virtual environment for Python 3.7 like this:

virtualenv -p python3.7 venv
source venv/bin/activate

and then run the installation commands again.

fangfufu commented 4 years ago

Still not working. I created a Python 3.7 environment under Conda.

(webcam) fangfufu@smithsonian:~/src/virtual_webcam_background$ pip install "git+https://github.com/patlevin/tfjs-to-tf.git@v0.5.0"
Collecting git+https://github.com/patlevin/tfjs-to-tf.git@v0.5.0
Cloning https://github.com/patlevin/tfjs-to-tf.git (to revision v0.5.0) to /tmp/pip-req-build-fvwum7ie
Running command git clone -q https://github.com/patlevin/tfjs-to-tf.git /tmp/pip-req-build-fvwum7ie
    ERROR: Command errored out with exit status 1:
    command: /home/fangfufu/anaconda3/envs/webcam/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-fvwum7ie/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-fvwum7ie/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-req-build-fvwum7ie/pip-egg-info
        cwd: /tmp/pip-req-build-fvwum7ie/
    Complete output (31 lines):
    /home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
    _np_qint8 = np.dtype([("qint8", np.int8, 1)])
    /home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
    _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
    /home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
    _np_qint16 = np.dtype([("qint16", np.int16, 1)])
    /home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
    _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
    /home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
    _np_qint32 = np.dtype([("qint32", np.int32, 1)])
    /home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
    np_resource = np.dtype([("resource", np.ubyte, 1)])
    Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-req-build-fvwum7ie/setup.py", line 2, in <module>
        from tfjs_graph_converter.version import VERSION
    File "/tmp/pip-req-build-fvwum7ie/tfjs_graph_converter/__init__.py", line 6, in <module>
        from tfjs_graph_converter import api
    File "/tmp/pip-req-build-fvwum7ie/tfjs_graph_converter/api.py", line 14, in <module>
        import tensorflowjs as tfjs
    File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflowjs/__init__.py", line 21, in <module>
        from tensorflowjs import converters
    File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflowjs/converters/__init__.py", line 24, in <module>
        from tensorflowjs.converters.tf_saved_model_conversion_v2 import convert_tf_saved_model
    File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 37, in <module>
        import tensorflow_hub as hub
    File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflow_hub/__init__.py", line 29, in <module>
        from tensorflow_hub.estimator import LatestModuleExporter
    File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflow_hub/estimator.py", line 64, in <module>
        class LatestModuleExporter(tf_v1.estimator.Exporter):
    AttributeError: module 'tensorflow_hub.tf_v1' has no attribute 'estimator'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
(webcam) fangfufu@smithsonian:~/src/virtual_webcam_background$ pip --version
pip 20.0.2 from /home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/pip (python 3.7)
allo- commented 4 years ago

What exactly did you install in the conda environment?

The installation order matters, because tfjs-to-tf had some packaging issues; see #3.

fangfufu commented 4 years ago

My webcam environment was created with conda create -n webcam python=3.7. requirements.txt and tensorflowjs installed fine.

I am going to give vanilla python 3.7 virtualenv a go in a bit to see if that works.

I think you should update the README to tell people to install Python 3.7 in a virtualenv.

allo- commented 4 years ago

So it is working for you now?

I think most of this will probably be solved when we have a proper setup.py.

fangfufu commented 4 years ago

It is working now. Closing the issue.

fangfufu commented 4 years ago

I just want to say your software works great. The threshold needs a lot of tweaking: I set it to 0.25, and it ends up classifying my lightbulb as a person. The default setup does not cover my full body.

allo- commented 4 years ago

0.25 gives total oversegmentation here; I use 0.7-0.8. I guess it depends on many factors like lighting and your environment.

What webcam resolution do you use? I have 1280x720 configured, and OpenCV defaults to 640x480 on my cam. I guess the aspect ratio (and with it the padding of the image used for detection) has quite a bit of influence as well.
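To illustrate the padding point: before inference, the frame has to match the model input's aspect ratio, which usually means letterboxing with black borders. A rough sketch (the function name and defaults are mine, not this repository's code):

```python
import numpy as np

def pad_to_aspect(frame, target_w=1280, target_h=720):
    """Letterbox a frame with black borders so it matches the target
    aspect ratio before inference. Illustrative only."""
    h, w = frame.shape[:2]
    target_ratio = target_w / target_h
    if w / h < target_ratio:
        # frame is too narrow: pad left and right
        new_w = int(round(h * target_ratio))
        pad = new_w - w
        return np.pad(frame, ((0, 0), (pad // 2, pad - pad // 2), (0, 0)))
    # frame is too wide (or already matches): pad top and bottom
    new_h = int(round(w / target_ratio))
    pad = new_h - h
    return np.pad(frame, ((pad // 2, pad - pad // 2), (0, 0), (0, 0)))
```

A 4:3 frame padded to 16:9 this way keeps its height and gains black bars on the sides, so the person is not stretched before detection.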

And sometimes the detection seems to depend on your clothes. A dark shirt with one color works better than a colorful one with complex patterns.

If you notice any patterns, it would be interesting to compile a table of which parameters work for which environment.

You can also try other models and multipliers. To get the other models, use simple_bodypix_python/get_model.sh, then set internal_resolution in the source code to 0.25, 0.5 (the current model), 0.75 or 1.0, and output_stride to 16 or 8.

When you get significantly better results for some combination you can open another bug for it, so we can track which combinations work well.

fangfufu commented 4 years ago

What do internal_resolution and output_stride do? I use 1280x720 as well. Is there any way to scale the image going into the network? When I set my webcam's resolution to 640x480, the performance was quite good, but when I set it to 1280x720, it was quite bad. In my own implementation, I scale down the image going into the network, and scale up the mask.
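The scale-down/scale-up approach can be sketched roughly like this (NumPy-only nearest-neighbour resizing for illustration; `model` is a stand-in for the actual BodyPix inference call, not this repository's API):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize with plain NumPy fancy indexing.
    Illustrative only; a real pipeline would use OpenCV or TensorFlow."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

def segment(frame, model, scale=0.5):
    """Run `model` on a downscaled copy of the frame, then upscale
    the resulting (h, w) mask back to the original frame size."""
    h, w = frame.shape[:2]
    small = resize_nearest(frame, int(h * scale), int(w * scale))
    mask = model(small)                 # mask at reduced resolution
    return resize_nearest(mask, h, w)   # back to full frame size
```

The network then only ever sees the small frame, which is where the speed difference between 640x480 and 1280x720 input comes from.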

allo- commented 4 years ago

Internal resolution and stride are parameters of the models, and you need to use the matching model for them. In the images in https://github.com/ajaichemmanam/simple_bodypix_python you can see what the image generated by the model looks like before scaling. And the stride parameter is visualized here: https://medium.com/machine-learning-algorithms/what-is-stride-in-convolutional-neural-network-e3b4ae9baedb

In principle, a higher internal resolution and a smaller stride should give a better result, but in my experiments the parameters in the script worked best. If you need very different parameters, they could be added as config options.

fangfufu commented 4 years ago

The parameters in your example config file definitely do not work best in my case. You know the blog post that the Google engineer posted? Have you tried his parameters? How did they perform?

allo- commented 4 years ago

He is using internal_resolution=0.5 "medium" (see utils.ts Line 86) and multiplier=0.75 with a threshold of 0.75.

I tend to get undersegmentation around the body with his parameters, and spots at the left and right.

allo- commented 4 years ago

@fangfufu Would you like to test the halo branch for testing the threshold?

For some reason, the heatmap needs a threshold of 0.999 for face detection on my machine, and I am curious whether this value works for you as well.

With the branch you should see a halo over your head. Then set debug_show_mask to 0 and then to 1 for detecting the left/right part of your face.

fangfufu commented 4 years ago

I will do that later. :)

fangfufu commented 4 years ago

By the way, can you scale down the image sent to bodypix?

allo- commented 4 years ago

This is done here: https://github.com/allo-/virtual_webcam_background/blob/d5dcb413f25048fed0b96f7db851f9e0d280b9b0/bodypix_functions.py#L35

fangfufu commented 4 years ago

How do I specify internal_resolution then? Can I change it in the config file? It is not documented (yet?).

allo- commented 4 years ago

You set it in the source code as a multiplier for the real resolution, e.g. 1.0 or 0.5.

I first thought it was coupled to the multiplier (model dependent), but it seems you can set it however you want. 1.0 (100%) means, in principle, best quality and slowest processing, but in practice setting some of the parameters to other values can sometimes improve the result.

Let's collect good parameters here: https://github.com/allo-/virtual_webcam_background/wiki/model-parameters

fangfufu commented 4 years ago

I couldn't get the halo branch running.

(webcam) fangfufu@smithsonian:~/src/virtual_webcam_background$ python virtual_webcam.py
Loading model...
done.
Loading images background.jpg ...
Finished loading background
Loading images podium-only-transparent.png ...
Finished loading background
776
(100, 200, 4)
Traceback (most recent call last):
File "virtual_webcam.py", line 392, in <module>
    mainloop()
File "virtual_webcam.py", line 383, in mainloop
    frame[:,:,0] = mask * 255
ValueError: could not broadcast input array from shape (720,1280,0,24) into shape (720,1280)

The normal branch still works.
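For reference, that ValueError is NumPy refusing to broadcast a mask that has gained extra trailing axes into a 2D channel slice; a minimal reproduction and the usual squeeze fix (shapes are illustrative, not taken from the halo branch):

```python
import numpy as np

frame = np.zeros((720, 1280, 3))
good_mask = np.ones((720, 1280))
frame[:, :, 0] = good_mask * 255        # works: shapes match exactly

bad_mask = np.ones((720, 1280, 1))      # stray trailing axis, as in the traceback
try:
    frame[:, :, 0] = bad_mask * 255     # raises ValueError: cannot broadcast
except ValueError:
    # np.squeeze drops size-1 axes so the assignment broadcasts again
    frame[:, :, 0] = np.squeeze(bad_mask) * 255
```

A shape like (720, 1280, 0, 24) additionally contains a zero-length axis, which suggests the mask array itself was built incorrectly upstream rather than merely unsqueezed.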

allo- commented 4 years ago

I am not sure you tried the correct version; I rebased the branch quite a few times. Try git reset --hard to some earlier state and then pull the latest version.