YiLin32Wang closed this issue 1 year ago
See this issue; maybe your camera poses are wrong?
Thanks for the quick reply! I've changed the transform_matrix computation as suggested in the issue you mentioned, but the results are not much improved; they are only denser this time. I've also double-checked that the revised version still works on the D-NeRF dataset.
https://github.com/hustvl/4DGaussians/assets/91527702/216f899b-a7a0-4be3-8b92-19618bc30467
I've resolved the problem. The matrix I exported from Blender was a world-to-camera matrix, but dataset_readers.py interprets it as a camera-to-world matrix. After inverting it, the model converges reasonably.
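For anyone hitting the same symptom: the fix above amounts to inverting each exported pose before writing it into the transforms file. A minimal sketch of that inversion is below; the helper name `w2c_to_c2w` is mine, not part of the repo, and it assumes the exported matrix is a rigid 4x4 transform (rotation plus translation, no scale), in which case the inverse has the closed form `[R^T | -R^T t]`.

```python
import numpy as np

def w2c_to_c2w(w2c):
    """Invert a rigid 4x4 world-to-camera matrix to get camera-to-world.

    For a rigid transform [R | t; 0 1], the inverse is [R^T | -R^T t; 0 1],
    which avoids a general matrix inversion. (Helper name is illustrative,
    not from the 4DGaussians codebase.)
    """
    w2c = np.asarray(w2c, dtype=np.float64)
    R = w2c[:3, :3]   # rotation part
    t = w2c[:3, 3]    # translation part
    c2w = np.eye(4)
    c2w[:3, :3] = R.T
    c2w[:3, 3] = -R.T @ t
    return c2w
```

If the exported matrix may contain scale or shear, `np.linalg.inv(w2c)` is the safe general-purpose fallback.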
Hi there,
Thanks for the great work!!
I've been trying to use my own custom data generated from Blender, using a Mixamo animation and sampled camera viewpoints. The input image frames look like this: I applied the arguments of the D-NeRF dataset directly and tried my best to match its settings: 800x800 resolution for each frame, and sparse camera views sampled for the train, test, and val sets.
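For reference, my transforms files follow the usual Blender/D-NeRF layout (this is a hand-written sketch, not my actual file; field names are my understanding of the D-NeRF format, which adds a per-frame `time` in [0, 1] on top of the NeRF synthetic format):

```json
{
  "camera_angle_x": 0.6911,
  "frames": [
    {
      "file_path": "./train/r_000",
      "time": 0.0,
      "transform_matrix": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 4.0],
        [0.0, 0.0, 0.0, 1.0]
      ]
    }
  ]
}
```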
But during training, the PSNR and L1 loss stay the same while densification goes on, at both the coarse and fine stages.
And the rendered video does not look right:
https://github.com/hustvl/4DGaussians/assets/91527702/2ec17a1c-fee7-40b5-9d0a-04478d8a69b3
I also tried fitting only the first frame, and it has the same issue. The rendered video:
https://github.com/hustvl/4DGaussians/assets/91527702/05dd4090-384e-45e0-b3a9-d54dcaacc7b5
Do you know what might cause the issue?