Hi,
Looking at the results, I suspect you need to adjust these two parameters:
They are used to filter out empty 3D regions for acceleration. The logic is that if a 3D point x is too far from your skeleton, it is likely in an empty region, so we can skip it (set its density to zero).
The world_dist is the threshold we use to filter out points in the world space. It should be larger than the maximum distance between any surface point and its closest bone, for any pose. The cano_dist is the same thing in the canonical space.
If you are not sure how to set them, simply setting them to a very large value should work, though it would slow down both training and inference because all the empty regions would still be computed.
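As a rough illustration (not TAVA's actual code), the filter amounts to computing the distance from each sample point to its nearest bone segment and dropping points beyond the threshold. The function and argument names below are made up for the sketch:

```python
# A minimal sketch of the distance-based filtering described above.
# Names and tensor shapes are assumptions for illustration only.
import torch

def filter_empty_points(points, bone_heads, bone_tails, dist_thresh):
    """Return a mask of points close enough to any bone segment.

    points:      (N, 3) query points (world or canonical space).
    bone_heads:  (B, 3) bone segment start positions in the same space.
    bone_tails:  (B, 3) bone segment end positions in the same space.
    dist_thresh: scalar threshold (e.g. world_dist or cano_dist).
    """
    # Vector along each bone, and from each bone head to each point.
    seg = bone_tails - bone_heads                        # (B, 3)
    rel = points[:, None, :] - bone_heads[None, :, :]    # (N, B, 3)

    # Project each point onto each bone segment and clamp to the segment.
    t = (rel * seg[None]).sum(-1) / (seg * seg).sum(-1).clamp(min=1e-8)
    t = t.clamp(0.0, 1.0)                                # (N, B)
    closest = bone_heads[None] + t[..., None] * seg[None]  # (N, B, 3)

    # Distance from each point to its nearest bone; keep points within threshold.
    dist = (points[:, None, :] - closest).norm(dim=-1).min(dim=-1).values
    return dist <= dist_thresh  # densities outside this mask can be set to zero
```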
Thanks!! It worked.
But somehow, the rendering seems pretty blurry for the moving parts. I think this may be because of a mismatch in the transformations. I use Blender to get the transformation for each bone (more specifically, I get the rotation_quaternion in Blender and convert it to a rotation matrix). Any idea what the issue could be?
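For reference, this is roughly the kind of conversion I mean (a sketch; the armature and bone names are placeholders):

```python
# Rough sketch of the quaternion-to-matrix conversion, using Blender's
# Python API (bpy / mathutils). "Armature" and "Bone" are placeholders.
import bpy

arm = bpy.data.objects["Armature"]
pbone = arm.pose.bones["Bone"]

# Local rotation of the pose bone relative to its rest pose
# (valid when the bone's rotation_mode is 'QUATERNION').
quat = pbone.rotation_quaternion
rot3x3 = quat.to_matrix()    # 3x3 mathutils.Matrix
rot4x4 = rot3x3.to_4x4()     # promoted to a 4x4 transform
```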
Can you also explain the rest and pose matrices? Also, how did you generate the data for the Animal Hare category?
Hi,
My script for rendering & extracting the pose is here: https://github.com/liruilong940607/blenderlib#rendering-without-shading.
The rest matrix transforms a bone from the bone space to the world space with the rest pose applied (canonical space), and the pose matrix transforms a bone from the bone space to the world space with a pose applied (view space). The bone space is where a bone's head is located at (0, 0, 0) and its tail is at (0, y, 0) in Blender.
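For example, these matrices can roughly be queried with Blender's Python API like this (a sketch with a placeholder armature name, not the exact export script):

```python
# Sketch: querying rest and pose matrices for each bone in Blender.
# "Armature" is a placeholder object name.
import bpy
import numpy as np

arm = bpy.data.objects["Armature"]

rest_matrices = {}
pose_matrices = {}
for pbone in arm.pose.bones:
    # Rest matrix: bone space -> armature space in the rest (canonical) pose.
    rest_local = pbone.bone.matrix_local
    # Pose matrix: bone space -> armature space with the current pose applied.
    pose_local = pbone.matrix
    # Pre-multiply by the armature's object transform to get bone -> world.
    rest_matrices[pbone.name] = np.array(arm.matrix_world @ rest_local)
    pose_matrices[pbone.name] = np.array(arm.matrix_world @ pose_local)
```

In this convention the bone's head sits at the origin of bone space and its y axis points toward the tail, matching the description above.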
Hope it helps!
I'm going to close this issue, as it seems the only thing you need to do is adapt the Blender code to your own dataset. Feel free to reopen it if you run into obstacles.
Hey!! This is pretty amazing work. I tried to train TAVA on a different dataset by making the inputs as close as possible to the 'animal hare' category. However, I was unable to get good results. Is there some parameter that needs to be chosen carefully, or anything that needs to be taken care of, before training it on a different dataset?
Thanks again for this amazing work!!! Hoping for your reply!!