Jeckinchen opened this issue 9 months ago
The network assumes input in the BODY25 format:
Because COCO does not include toe keypoints, it is impossible to track the orientation of the foot!
You can either generate "foot" points close to the heel and pass them in, or pass them in as zeros and hope the neural network does fine without them.
The order and naming scheme of the joints can be found here:
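The zero-filling (or heel-seeding) workaround above can be sketched like this. This is a minimal sketch, not part of the project: the index constants follow the usual OpenPose BODY25 ordering (11 = RAnkle, 14 = LAnkle, 19-24 = toes/heels), and the 0.5 confidence discount for synthesized points is an arbitrary choice, not something prescribed by the answer.

```python
import numpy as np

# BODY25 indices (OpenPose convention): 11 = RAnkle, 14 = LAnkle,
# 19-21 = LBigToe/LSmallToe/LHeel, 22-24 = RBigToe/RSmallToe/RHeel.
L_ANKLE, R_ANKLE = 14, 11
L_FOOT, R_FOOT = (19, 20, 21), (22, 23, 24)

def fill_missing_feet(pose, copy_from_ankle=True):
    """pose: (25, 3) array of (x, y, confidence) rows in BODY25 order.
    Foot joints with confidence 0 are either left at zero or seeded
    near the ankle/heel, as suggested above."""
    pose = pose.copy()
    for ankle, foot in ((L_ANKLE, L_FOOT), (R_ANKLE, R_FOOT)):
        for j in foot:
            if pose[j, 2] == 0 and copy_from_ankle and pose[ankle, 2] > 0:
                pose[j, :2] = pose[ankle, :2]      # place the "foot" point at the ankle
                pose[j, 2] = pose[ankle, 2] * 0.5  # mark it as low confidence (arbitrary factor)
    return pose
```

With `copy_from_ankle=False` the function simply leaves the missing foot joints at zero, which is the second option from the answer.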
Ammar
Thanks for your answer, but I still have a problem. My 2D keypoint estimation results follow the format and order specified at https://github.com/open-mmlab/mmpose/blob/537bd8e543ab463fb55120d5caaa1ae22d6aaf06/configs/_base_/datasets/coco_wholebody.py#L13C20-L13C20 with 133 keypoints. However, I noticed that csvNET.py uses 138 keypoints. How can I convert these 133 keypoints to the 138 keypoints required by the project? Looking forward to your answer. Thank you!
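A sketch of the body part of such a conversion, for what it's worth. The COCO-WholeBody indices assumed here (body 0-16, feet 17-22 in the order left_big_toe, left_small_toe, left_heel, right_big_toe, right_small_toe, right_heel) should be checked against your estimator's skeleton definition, and the remaining hand/face columns needed to reach csvNET.py's 138 slots must be ordered according to the header of 2dJoints_v1.4.csv — that part is not shown here. Neck and MidHip do not exist in COCO and are synthesized as midpoints, which is a common convention, not necessarily what this project expects.

```python
import numpy as np

# BODY25 target slot -> COCO-WholeBody source index (None = synthesized).
COCO_WB_TO_BODY25 = [
    0,           # 0  Nose
    None,        # 1  Neck   = midpoint of shoulders (5, 6)
    6, 8, 10,    # 2-4  RShoulder, RElbow, RWrist
    5, 7, 9,     # 5-7  LShoulder, LElbow, LWrist
    None,        # 8  MidHip = midpoint of hips (11, 12)
    12, 14, 16,  # 9-11  RHip, RKnee, RAnkle
    11, 13, 15,  # 12-14 LHip, LKnee, LAnkle
    2, 1, 4, 3,  # 15-18 REye, LEye, REar, LEar
    17, 18, 19,  # 19-21 LBigToe, LSmallToe, LHeel (WholeBody feet)
    20, 21, 22,  # 22-24 RBigToe, RSmallToe, RHeel
]

def wholebody133_to_body25(kp):
    """kp: (133, 3) COCO-WholeBody (x, y, score) rows. Returns (25, 3) BODY25."""
    out = np.zeros((25, 3), dtype=kp.dtype)
    for dst, src in enumerate(COCO_WB_TO_BODY25):
        if src is not None:
            out[dst] = kp[src]
    out[1] = (kp[5] + kp[6]) / 2    # Neck from shoulder midpoint
    out[8] = (kp[11] + kp[12]) / 2  # MidHip from hip midpoint
    return out
```

The resulting 25 rows would then be concatenated with the hand and face keypoints, reordered to match the CSV column layout that csvNET.py expects.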
Hello, I am currently using the file "src/python/mnet4/csvNET.py". I noticed that in the file con0014/2dJoints_v1.4.csv downloaded from Google Colab, there seem to be 138 2D keypoints. The results from my 2D pose estimator are in the coco-wholebody format with 133 keypoints. How can I use csvNET.py to obtain the corresponding BVH file?