SuperStacie opened this issue 1 year ago
+1
Yeah, they didn't specify in the paper that the dataset requires computing a pose and some sort of scale for each frame, which then go into the NeRF training. It looks like these values are derived somehow from landmarks. Given that their examples have shaky frames, it seems pose and scale play a key role?
But according to https://github.com/YuelangX/LatentAvatar/blob/main/lib/module/HeadModule.py, they use pose + scale only in the color_mlp of the NeRF, not in the density_mlp. Perhaps we don't need that embedding?
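For what it's worth, here is a minimal sketch of the conditioning pattern as I read that file; module and argument names are illustrative, not the repo's actual API:

```python
import torch
import torch.nn as nn

# Sketch of the split described above: density is unconditioned,
# color additionally sees a pose+scale embedding. Names are hypothetical.
class TinyNeRFHead(nn.Module):
    def __init__(self, feat_dim=64, cond_dim=7, hidden=128):
        super().__init__()
        # density branch: geometry features only, no pose/scale input
        self.density_mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + hidden),  # sigma + intermediate feature
        )
        # color branch: conditioned on pose (6D) + scale (1D) as well
        self.color_mlp = nn.Sequential(
            nn.Linear(hidden + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feat, pose_scale):
        h = self.density_mlp(feat)
        sigma, geo_feat = h[..., :1], h[..., 1:]
        # pose/scale enter only here, so density (geometry) ignores them
        rgb = self.color_mlp(torch.cat([geo_feat, pose_scale], dim=-1))
        return sigma, rgb
```

If that reading is right, pose and scale would only modulate appearance (e.g. shading as the head moves), while geometry stays pose-agnostic, which may be why dropping the embedding could still work.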
Hi, thanks for the interesting work!
I wonder whether the pose and scale in the dataset are used during training and inference. And how do we prepare the data and acquire the camera parameters if we want to try it out on self-captured data?
Thanks
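In case it helps anyone experimenting with their own captures: one common way to get a per-frame pose from 2D landmarks is PnP against a generic 3D face template. This is only an assumption about how such data could be prepared, not the authors' actual preprocessing; every name below is hypothetical:

```python
import cv2
import numpy as np

# Hypothetical preprocessing sketch, NOT the repo's actual pipeline:
# estimate a per-frame head pose by solving PnP between detected 2D
# landmarks and a generic 3D face template (e.g. 68 canonical keypoints).
def estimate_pose(landmarks_2d, template_3d, image_size):
    h, w = image_size
    # assume a simple pinhole camera; focal length ~ image width is a
    # common rough guess when the real intrinsics are unknown
    K = np.array([[w, 0, w / 2],
                  [0, w, h / 2],
                  [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        template_3d.astype(np.float64),   # (N, 3) template points
        landmarks_2d.astype(np.float64),  # (N, 2) detected landmarks
        K, dist,
    )
    if not ok:
        raise RuntimeError("solvePnP failed for this frame")
    return rvec, tvec  # rotation (Rodrigues vector) and translation
```

Scale might then come from something like a similarity fit between the detected landmarks and the template, but how the authors actually compute it would need confirmation.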
Hello, have you resolved this issue?
+1 Hello, have you resolved this issue?