Open xiaozhuge080 opened 2 years ago
Hello, did you run demo.py first and see the result, or did you only run "python Visualizations/simple.py"? In any case, I found that the repository initially contained a broken "q_iter_dyn.npy", which is what gets shown if you only run "python Visualizations/simple.py". I'll update the sample data; thank you for letting me know.
Thanks for your reply. I also tried running demo.py on the dance sample first and then Visualizations/simple.py, but I can't get a result with the ground plane like in your YouTube video. By the way, how can I get the floor position for my own data, like "sample_data/sample_floor_position.npy"?
Hi, I also ran the code on my own data, and I found that there is very little bend in the right elbow in all test videos. Maybe something is wrong, but I don't know what. Can you help me? Thank you very much!
Aha, it seems that I have uploaded the wrong version of the trained models. Let me make sure on our end. Apologies for the trouble.
If you are recording your own sequences, you can place a checkerboard on the floor and use it as the world frame when calibrating the cameras, i.e., the world frame is where the floor is. For estimating the floor position in a YouTube video, you can, for example, 1) run an off-the-shelf 3D pose estimator (e.g. VNect, http://vcai.mpi-inf.mpg.de/projects/VNect/), 2) collect the trajectories of the 3D foot positions over the sequence, and 3) fit a plane to those trajectories. This gives you a rough estimate of the floor position.
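The plane-fitting step above can be sketched with a least-squares fit via SVD. This is only an illustration, not the repository's code; the input array name and the choice of y as the "up" axis are assumptions.

```python
# Hedged sketch: estimate a floor plane from per-frame foot 3D positions.
# "foot_positions" is a hypothetical (N, 3) array of toe/ankle coordinates
# collected across a sequence (e.g. from a monocular 3D pose estimator).
import numpy as np

def fit_floor_plane(foot_positions):
    """Least-squares plane fit via SVD.

    Returns (normal, point): the unit plane normal and the centroid,
    so the plane is {x : normal . (x - point) = 0}.
    """
    pts = np.asarray(foot_positions, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    # Make the normal point "up" by convention (positive y here; adjust
    # to your coordinate system).
    if normal[1] < 0:
        normal = -normal
    return normal, centroid

# Toy usage: noisy points scattered near the plane y = 0.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 1] = 0.01 * rng.standard_normal(200)  # nearly flat in y
n, p0 = fit_floor_plane(pts)
```

Fitting on all frames at once averages out per-frame pose jitter, which is usually noisier than the floor itself.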
Thanks for your answer!
I'm looking forward to the correct version of the trained models!
I came across the same problem: the reconstructed motion has a foot-skating problem. Here is the example I got after running demo.py. I'm looking forward to trying the correct version of the trained models and replicating your amazing results.
@catherineytw Hi, I got the same results. Did you modify sample_floor_position.npy, or did you just use the default floor position?
I downloaded rbdl v2.6.0. When I compiled it like this:
mkdir build
cd build/
cmake -D CMAKE_BUILD_TYPE=Release ../
cmake -D RBDL_USE_SIMPLE_MATH=TRUE ../
cmake -D RBDL_BUILD_PYTHON_WRAPPER=TRUE ../
cmake -D RBDL_BUILD_ADDON_URDFREADER=ON ../
make
Something went wrong
code/Neural_Physcap_Demo-master/rbdl-2.6.0/build/python/rbdl-python.cxx:701:10: fatal error: numpy/arrayobject.h: No such file or directory
701 | #include "numpy/arrayobject.h"
| ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [python/CMakeFiles/rbdl-python.dir/build.make:81: python/CMakeFiles/rbdl-python.dir/rbdl-python.cxx.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:200: python/CMakeFiles/rbdl-python.dir/all] Error 2
make: *** [Makefile:156: all] Error 2
I tried some fixes, but nothing worked. Did you encounter this problem? How did you install this package? Looking forward to your reply.
@catalyster try rbdl-orb; compile issues should not be discussed here, in my opinion.
I used the example data and didn't change anything. The character moved strangely, and I had no idea what was wrong. Perhaps the pre-trained model provided in the link was not the best model, as he said in the earlier replies.
Hi, can you try the updated pre-trained models? You can download the networks from the link in the README.md. I believe the non-bending elbow issue should be resolved.
It seems your system failed to locate NumPy. Perhaps you can add the path to NumPy explicitly in CMakeLists.txt, e.g.

INCLUDE_DIRECTORIES (
  ${PROJECT_SOURCE_DIR}/include
  ${PROJECT_SOURCE_DIR}/python
  ${PROJECT_SOURCE_DIR}
  /PATH/TO/numpy/core/include   <---------- add a path here!
)
The path to NumPy can be obtained from "np.get_include()".
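A quick way to print that include directory (assuming a working Python + NumPy install for the same interpreter you build against):

```python
# Print the NumPy C header directory to paste into the CMake file.
# It should contain the header the compiler could not find.
import os
import numpy as np

include_dir = np.get_include()
print(include_dir)

# Sanity check: the missing header from the compile error lives here.
header = os.path.join(include_dir, "numpy", "arrayobject.h")
```

Make sure you run this with the same Python that the rbdl wrapper will be built for, otherwise the headers may not match the interpreter.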
The above is a quick but hacky solution. If it does not solve the issue, please open an issue in the rbdl repository (https://github.com/rbdl/rbdl), not here; I can't do anything about rbdl itself.
@soshishimada thanks for your new model. The right-elbow issue seems resolved, but the results still have foot-slide and body-bend issues. Will these be addressed in the future?
I forgot that the results are influenced by the floor position; I will add the floor and test when I have spare time.
I have added the cam & floor params; the results seem better.
@catherineytw Hi, the new pre-trained models were updated, and I ran into the foot-floor penetration problem. Can you share how you transform the floor to get the correct visualization like in your gif?
The result was generated by the previous version of the pretrained model with the default floor transform parameters; I didn't change anything. By the way, I am very curious how you got the front-facing character: I used the default urdf and got a back-facing character.
You need to modify the RT in simple.py (https://github.com/soshishimada/Neural_Physcap_Demo/blob/master/Visualizations/simple.py#L23), but the result with sample_floor_position.npy still looks wrong.
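One way to modify the RT so the character faces the camera is to compose it with a 180-degree rotation about the vertical axis. This is a sketch only: it assumes RT is a 4x4 world-to-camera matrix and that y is the "up" axis; the helper name is illustrative, not the repository's actual API.

```python
# Hedged sketch: turn the character around by rotating the world frame
# 180 degrees about y before applying the original extrinsic RT.
import numpy as np

def rotate_about_y(RT, theta=np.pi):
    c, s = np.cos(theta), np.sin(theta)
    R_y = np.array([
        [c,   0.0, s,   0.0],
        [0.0, 1.0, 0.0, 0.0],
        [-s,  0.0, c,   0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    # Right-multiplying rotates the world frame first, then applies RT.
    return RT @ R_y

RT_flipped = rotate_about_y(np.eye(4))
```

If the character ends up mirrored or tilted instead, the up axis in your data is probably a different one, so rotate about that axis instead.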
Hello,
I tested the model on my own videos, but even when I change the floor values (estimating the floor as the best-fit plane through the toes' 3D coordinates over the video), the results still show foot-slide and heavy body-bend issues. On the other hand, changing the camera values did not change the results at all.
@ykk648 could you explain how you changed the floor and camera parameters, and how much this change improved your results?
Your help would be much appreciated. Thank you very much!
@violamarconetto
Hi, at first I tested a front-facing dancing video and the results seemed bad. Then I updated the code and tested on a new indoor video with a calibration obtained from a ChArUco board, using the board location as the floor position; here are some results. When I said it's better, I meant better than the dancing video. The body-bend may be caused by camera distortion, and the foot-slide issue may be caused by jittery keypoints. I did no more tests after that.
note: the camera is seriously distorted
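Since heavy lens distortion can bend the reconstructed body, one mitigation is to undistort the 2D keypoints before feeding them to the pipeline. The sketch below assumes a simple two-coefficient radial model (k1, k2) with known intrinsics K; with OpenCV available, cv2.undistortPoints does the same job. The function and parameter names here are illustrative, not part of the repository.

```python
# Hedged sketch: undo radial lens distortion on 2D keypoints.
import numpy as np

def undistort_points(pts, K, k1, k2, iters=5):
    """pts: (N, 2) distorted pixel coordinates -> undistorted pixels."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Normalized (distorted) image coordinates.
    xd = (pts[:, 0] - cx) / fx
    yd = (pts[:, 1] - cy) / fy
    # Fixed-point iteration inverting x_d = x * (1 + k1*r^2 + k2*r^4).
    x, y = xd.copy(), yd.copy()
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return np.stack([x * fx + cx, y * fy + cy], axis=1)

# With zero distortion the points must pass through unchanged.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
out = undistort_points(np.array([[700.0, 400.0]]), K, 0.0, 0.0)
```

A few fixed-point iterations are enough for typical distortion magnitudes; strongly distorted wide-angle footage may also need tangential terms.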
simplescreenrecorder-2022-08-02_18.03.16.mp4
Thank you very much for your answer.
I am sure that ......
I'm not sure what you mean by "the author's work is to obtain the pose and trans of the model through 2D points." Regarding the manipulation of the character: as written in the paper, we apply a corrective force at the root joint to prevent the character from falling down. Unfortunately, I'm not able to update the repository at the moment due to reasons on our end. But I'm happy to have a discussion, and I believe it's much more efficient to chat if you have questions about the theory. Please email me (the address is on our project page) if you wish to talk :)
Hi, I have run your visualization code, but I can't get excellent results like in your YouTube video (https://www.youtube.com/watch?v=8JhUjzFAMJI&t=327s). Can you share some samples? Thank you very much!