Closed fangfufu closed 4 years ago
You're using Python 2, but TensorFlow 2.x and the related packages are Python 3 only. In addition, you will probably need Python 3.7 (even when Python 3.8 is available).
Try building a virtual environment for Python 3.7 like this:
virtualenv -p python3.7 venv
source venv/bin/activate
and then run the installation commands again.
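Once the environment is active, a quick sanity check from inside Python shows which interpreter the virtualenv actually provides (a minimal check, not part of the project; pip installs into whichever interpreter this reports):

```python
import sys

# Report the interpreter version the active environment provides.
version = "{}.{}".format(*sys.version_info[:2])
print("Running Python", version)
if sys.version_info[:2] != (3, 7):
    print("Warning: expected Python 3.7 inside the venv")
```

If this prints anything other than 3.7, the wrong interpreter is on the PATH and the install will fail the same way again.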
Still not working. I created a Python 3.7 environment under Conda.
(webcam) fangfufu@smithsonian:~/src/virtual_webcam_background$ pip install "git+https://github.com/patlevin/tfjs-to-tf.git@v0.5.0"
Collecting git+https://github.com/patlevin/tfjs-to-tf.git@v0.5.0
Cloning https://github.com/patlevin/tfjs-to-tf.git (to revision v0.5.0) to /tmp/pip-req-build-fvwum7ie
Running command git clone -q https://github.com/patlevin/tfjs-to-tf.git /tmp/pip-req-build-fvwum7ie
ERROR: Command errored out with exit status 1:
command: /home/fangfufu/anaconda3/envs/webcam/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-fvwum7ie/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-fvwum7ie/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-req-build-fvwum7ie/pip-egg-info
cwd: /tmp/pip-req-build-fvwum7ie/
Complete output (31 lines):
/home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/fangfufu/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-fvwum7ie/setup.py", line 2, in <module>
from tfjs_graph_converter.version import VERSION
File "/tmp/pip-req-build-fvwum7ie/tfjs_graph_converter/__init__.py", line 6, in <module>
from tfjs_graph_converter import api
File "/tmp/pip-req-build-fvwum7ie/tfjs_graph_converter/api.py", line 14, in <module>
import tensorflowjs as tfjs
File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflowjs/__init__.py", line 21, in <module>
from tensorflowjs import converters
File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflowjs/converters/__init__.py", line 24, in <module>
from tensorflowjs.converters.tf_saved_model_conversion_v2 import convert_tf_saved_model
File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 37, in <module>
import tensorflow_hub as hub
File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflow_hub/__init__.py", line 29, in <module>
from tensorflow_hub.estimator import LatestModuleExporter
File "/home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/tensorflow_hub/estimator.py", line 64, in <module>
class LatestModuleExporter(tf_v1.estimator.Exporter):
AttributeError: module 'tensorflow_hub.tf_v1' has no attribute 'estimator'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
(webcam) fangfufu@smithsonian:~/src/virtual_webcam_background$ pip --version
pip 20.0.2 from /home/fangfufu/anaconda3/envs/webcam/lib/python3.7/site-packages/pip (python 3.7)
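Note that the traceback above mixes two installations: TensorFlow is imported from ~/.local/lib/python3.7 while tensorflowjs comes from the conda env. A quick way to spot that kind of shadowing (a diagnostic sketch; the package names are the ones from the traceback):

```python
import importlib

def locate(*names):
    """Print the file each importable package actually resolves to."""
    for name in names:
        try:
            module = importlib.import_module(name)
        except ImportError:
            print(name, "-> not installed")
        else:
            print(name, "->", getattr(module, "__file__", "<builtin>"))

# In the failing environment this would show tensorflow under
# ~/.local/lib/python3.7 but tensorflowjs under anaconda3/envs/webcam,
# i.e. pip --user packages shadowing the conda environment.
locate("tensorflow", "tensorflowjs", "tensorflow_hub")
```

If the paths point at different site-packages trees, uninstalling the ~/.local copies (or using a clean venv) usually resolves the AttributeError.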
What exactly did you install in the conda environment?
The order must be:
tensorflow
tensorflowjs
tfjs-to-tf
because tfjs-to-tf had some packaging issues, see #3.
My webcam environment was created by conda create -n webcam python=3.7. requirements.txt and tensorflowjs installed fine.
I am going to give vanilla python 3.7 virtualenv a go in a bit to see if that works.
I think you should update the README to tell people to install Python 3.7 in a virtualenv.
So it is working for you now?
I think most of this will probably be solved when we have a proper setup.py.
It is working now. Closing the issue.
I just want to say your software works great. The thresholds need a lot of tweaking, though: I set it to 0.25, and it ends up classifying my lightbulb as a person. The default setup does not cover my full body.
0.25 is a total oversegmentation here and I use 0.7-0.8. I guess it depends on many factors like lighting and your environment.
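The threshold is applied to the per-pixel "person" scores the model emits, so lowering it keeps more pixels as foreground (hence the lightbulb at 0.25). A minimal sketch of the effect, using a made-up score map rather than real BodyPix output:

```python
import numpy as np

# Fake per-pixel person scores in [0, 1]; real ones come from the model.
scores = np.array([[0.1, 0.3],
                   [0.6, 0.9]])

loose = scores > 0.25   # oversegments: 3 of 4 pixels count as person
strict = scores > 0.75  # undersegments: only the most confident pixel
print(loose.sum(), strict.sum())  # -> 3 1
```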
What webcam resolution do you use? I have 1280x720 configured and OpenCV defaults to 640x480 on my cam. I guess the aspect ratio (and with this the padding of the image used for the detection) has quite a bit of influence as well.
And sometimes the detection seems to depend on your clothes. A dark shirt with one color works better than a colorful one with complex patterns.
When you notice any patterns, it would be interesting to compile a table of which parameters work for which environment.
You can also try other models and multipliers. To get the other models use simple_bodypix_python/get_model.sh, then set internal_resolution in the source code to 0.25, 0.5 (current model), 0.75 or 1.0, and output_stride to 16 or 8.
When you get significantly better results for some combination you can open another bug for it, so we can track which combinations work well.
What do internal_resolution and output_stride do? I use 1280x720 as well. Is there any way to scale the image going into the network? When I set my webcam's resolution to 640x480, the performance was quite good, but at 1280x720 it is quite bad. In my own implementation, I scale down the image going into the network and scale up the mask.
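The downscale/upscale approach described here can be sketched with nearest-neighbour index maps (numpy only, to keep it self-contained; a real implementation would typically use cv2.resize):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize via index maps.
    Works for 2-D masks and (H, W, C) images alike."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

# Feed a half-resolution frame to the network, then blow the mask back up.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
small = resize_nearest(frame, 360, 640)       # network input
mask_small = np.ones((360, 640), dtype=bool)  # pretend network output
mask = resize_nearest(mask_small, 720, 1280)  # back to frame size
print(small.shape, mask.shape)
```

The mask upscale is cheap, so the cost saving of running the network on the smaller frame dominates.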
Internal resolution and stride are parameters of the models and you need to use the matching model for them. In the images in https://github.com/ajaichemmanam/simple_bodypix_python you can see what the image generated by the model looks like before scaling. The stride parameter is visualized here: https://medium.com/machine-learning-algorithms/what-is-stride-in-convolutional-neural-network-e3b4ae9baedb
In principle, higher internal resolution and lower stride should give a better result, but in my experiments the parameters in the script worked best. If you need very different parameters, they could be added as config options.
The parameters in your example config file definitely do not work best in my case. You know the blog post that the Google engineer posted. Have you tried his parameters? How did they perform?
He is using internal_resolution=0.5 "medium" (see utils.ts Line 86) and multiplier=0.75 with a threshold of 0.75.
I tend to get an undersegmentation around the body with his parameters, and spots at the left and right.
@fangfufu Would you like to try the halo branch for testing the threshold?
For some reason the heatmap needs a threshold of 0.999 for face detection for me and I am curious if this value works for you as well.
With the branch you should see a halo over your head. Then set debug_show_mask to 0 and then to 1 for detecting the left/right part of your face.
I will do that later. :)
By the way, can you scale down the image sent to bodypix?
How do I specify internal_resolution then? Can I change it in the config file? It is not documented (yet?).
You set it in the source code as a multiplier for the real resolution, e.g. 1.0 or 0.5.
I first thought it was coupled to the multiplier (model dependent), but it seems you can set it as you like. 1.0 (100%) means in principle the best quality and slowest processing, but in practice setting some of the parameters to other values can sometimes improve the result.
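In the tfjs BodyPix code the internal-resolution multiplier is snapped to a stride-aligned input size before inference. A sketch of that formula, assuming the valid-size rule (n - 1) % output_stride == 0 used by the tfjs implementation:

```python
def to_valid_resolution(size, internal_resolution, output_stride):
    """Scale a frame dimension by the internal-resolution multiplier,
    then snap down to the nearest size n with (n - 1) % output_stride == 0."""
    scaled = int(size * internal_resolution)
    return (scaled - 1) // output_stride * output_stride + 1

# A 1280x720 frame at internal_resolution=0.5, output_stride=16:
print(to_valid_resolution(1280, 0.5, 16),
      to_valid_resolution(720, 0.5, 16))  # -> 625 353
```

So the network never sees exactly half the frame; it sees the nearest stride-compatible size below it.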
Let's collect good parameters here: https://github.com/allo-/virtual_webcam_background/wiki/model-parameters
I couldn't get the halo branch running.
(webcam) fangfufu@smithsonian:~/src/virtual_webcam_background$ python virtual_webcam.py
Loading model...
done.
Loading images background.jpg ...
Finished loading background
Loading images podium-only-transparent.png ...
Finished loading background
776
(100, 200, 4)
Traceback (most recent call last):
File "virtual_webcam.py", line 392, in <module>
mainloop()
File "virtual_webcam.py", line 383, in mainloop
frame[:,:,0] = mask * 255
ValueError: could not broadcast input array from shape (720,1280,0,24) into shape (720,1280)
The normal branch still works.
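The ValueError above means the mask did not come out as a plain (height, width) array, so it cannot be assigned into one colour channel of the frame. A defensive shape check before the assignment gives a clearer failure (a sketch only; the actual fix in the thread was pulling the rebased branch; the shapes here are hypothetical):

```python
import numpy as np

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
mask = np.ones((720, 1280, 1))  # mask with a stray trailing axis

# frame[:, :, 0] expects a (720, 1280) array; drop size-1 axes first
# and fail loudly if the mask is still malformed.
mask2d = np.squeeze(mask)
assert mask2d.shape == frame.shape[:2], f"bad mask shape {mask2d.shape}"
frame[:, :, 0] = mask2d * 255
print(frame[0, 0, 0])  # -> 255
```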
I am not sure if you tried the correct version; I rebased the branch quite a few times. Try git reset --hard to some earlier state and pull the latest version.
I know it sounds silly, but I can't install tensorflowjs... You might want to specify which version you want. I am on Debian Buster, btw.
I also had problems with installing tfjs-to-tf.