Ishihara-Masabumi opened this issue 1 year ago
Moreover, the video almost doesn't move.
The 3D points output window does get created here; however, depending on the visibility of the person, there are a lot of "if" checks that can prevent it from appearing.
Please remember that this repository currently hosts a body pose estimation method (in the master branch) and a body+hands pose estimation method (in the mnet3 branch). Your input image only shows a head, so the reason nothing is output is that there is no view of the body.
Regarding the framerate: to achieve the maximum framerate, the demo in this repository is programmed to use the GPU for 2D joint estimation and the CPU for 3D (possibly in a multithreaded configuration), so this combination utilizes system resources as efficiently as possible. If your framerate is low, it is most probably caused by a CPU-only TensorFlow library that handles all of the processing on the CPU, combined with using the heavier `--openpose` 2D joint estimator.
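A quick way to sanity-check this explanation is to see whether a CUDA runtime is even visible on the system: if it is not, TensorFlow will necessarily fall back to the CPU. This is a hypothetical diagnostic sketch, not part of the repository; `ldconfig` and `nproc` are assumed to be available (standard on most Linux distributions).

```shell
# Sketch (assumption): check whether a GPU build of TensorFlow could
# possibly be used on this machine, or whether everything will run on CPU.
if ldconfig -p 2>/dev/null | grep -q libcudart; then
  echo "CUDA runtime found: a GPU build of TensorFlow can use it"
else
  echo "No CUDA runtime found: TensorFlow will fall back to CPU"
fi
# How many CPU threads are available for the CPU-side 3D processing.
echo "CPU threads available: $(nproc)"
```

If the second line reports a small thread count and no CUDA runtime is found, a low framerate with `--openpose` is the expected outcome.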
Trying the `--forth` 2D joint estimator instead:

`./MocapNET2LiveWebcamDemo --from /dev/video0 --forth`

will work with a decent framerate (albeit with lower-quality 2D input, which will result in lower-quality 3D output) even when all of the computations are conducted on the CPU.
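Before launching either variant, it can also help to confirm that the webcam device actually exists and is readable, since a missing or unreadable `/dev/video0` would likewise leave the extra windows empty. This pre-flight check is a hypothetical sketch, not something shipped with the repository.

```shell
# Sketch (assumption): verify the capture device before running the demo.
DEV=/dev/video0
if [ -c "$DEV" ] && [ -r "$DEV" ]; then
  echo "Webcam $DEV is available"
else
  echo "Webcam $DEV is missing or unreadable; try another /dev/video* device"
fi
```

If the device is reported missing, listing `/dev/video*` will show which capture devices the kernel actually registered.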
When I run the following command, the additional window has no image, as below:

`./MocapNET2LiveWebcamDemo --from /dev/video0 --live`