Hello,
fantastic work you are doing! I have a question regarding the live recordings on the HTC Vive HMD and the subsequent pose prediction.
As I understand it, you trained a Transformer network + MLP and implemented an IK module and an FK module. To get a pose from your Transformer network, you need a vector in your latent space. How did you generate that vector from the HMD/controller data so that you were able to produce a pose? And is there a file in your project that I could use to try to replicate your results?
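To make my question more concrete, here is roughly what I imagined the input preparation might look like. This is only my own assumption, not taken from your code: the function name `build_sparse_input` and the position + quaternion layout are hypothetical, and I don't know whether you concatenate the three tracked devices this way before feeding them to the Transformer.

```python
import numpy as np

def build_sparse_input(hmd_pose, left_ctrl_pose, right_ctrl_pose):
    """Hypothetical sketch: flatten the three tracked 6-DoF poses
    (position xyz + rotation quaternion xyzw) into a single vector,
    which I assume would then be passed to the Transformer encoder
    to obtain the latent vector."""
    parts = []
    for pos, quat in (hmd_pose, left_ctrl_pose, right_ctrl_pose):
        parts.append(np.asarray(pos, dtype=np.float32))   # (3,)
        parts.append(np.asarray(quat, dtype=np.float32))  # (4,)
    return np.concatenate(parts)  # shape (21,)

# Example frame of tracking data (made-up values)
hmd   = ((0.0, 1.7, 0.0),  (0.0, 0.0, 0.0, 1.0))
left  = ((-0.3, 1.2, 0.2), (0.0, 0.0, 0.0, 1.0))
right = ((0.3, 1.2, 0.2),  (0.0, 0.0, 0.0, 1.0))

x = build_sparse_input(hmd, left, right)
print(x.shape)  # (21,)
```

Is this roughly the idea, or do you encode the HMD/controller transforms differently (e.g. as a temporal window of frames rather than a single frame)?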
Thanks!