penghaohsu0222 opened this issue 6 months ago
Hi @penghaohsu0222, thank you for your feedback. The issue you are encountering may be due to the way the original renderer is encapsulated to support dynamic rendering. Currently, this implementation only supports datasets in the NeRF-synthetic format and does not yet support the COLMAP format. As a workaround, you can save the motion sequence and render it with the original renderer. I have tested this method and it works.
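For reference, a rough sketch of that workaround using the original renderer from graphdeco-inria/gaussian-splatting is below. The names and signatures involved (the assumed [T, N, 3] shape of f_pos, the _xyz attribute, and the Scene/GaussianModel/render interfaces) may differ slightly between repository versions, so treat it as a starting point rather than a drop-in script.

# Sketch only: replay a saved motion sequence through the original 3DGS renderer.
# Assumes the layout of graphdeco-inria/gaussian-splatting; names and signatures
# (Scene, GaussianModel, render, _xyz) may vary slightly between versions.
import os
import torch
from torchvision.utils import save_image
from gaussian_renderer import render          # original rasterizer wrapper
from scene import Scene, GaussianModel

def render_motion_sequence(model_args, pipe_args, task_name, out_dir="frames"):
    os.makedirs(out_dir, exist_ok=True)
    gaussians = GaussianModel(model_args.sh_degree)
    scene = Scene(model_args, gaussians, load_iteration=-1, shuffle=False)

    # Motion sequence saved as above; assumed shape [T, N, 3]
    # (frames x Gaussians x xyz). b_pos can be replayed the same way.
    f_pos = torch.load(f"./output/{task_name}/f_pos.pt")

    cam = scene.getTrainCameras()[0]               # fixed viewpoint
    background = torch.zeros(3, device="cuda")     # black background

    for t, pos in enumerate(f_pos):
        # Overwrite the Gaussian centers for frame t, then rasterize as usual.
        gaussians._xyz = torch.nn.Parameter(pos.cuda())
        image = render(cam, gaussians, pipe_args, background)["render"]
        save_image(image.clamp(0, 1), os.path.join(out_dir, f"{t:04d}.png"))

The saved frames can then be assembled into a video with ffmpeg or a similar tool.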
Thanks a lot for your reply!
torch.save(f_pos, f"./output/{args.task_name}/f_pos.pt")
torch.save(b_pos, f"./output/{args.task_name}/b_pos.pt")
Is it correct to save the motion sequence in these two lines?
Moreover, could you provide more detail on how to render the video sequence with the original renderer from the Gaussian Splatting paper?
Thank you very much!
I encountered an issue while using my own data. I first converted my data to the expected format using convert.py from Gaussian Splatting and then ran LoopGaussian, but the output was a completely black video. The provided datasets produce successful results; the error only occurs with my own data. I would like to understand the possible reasons for this issue and am willing to provide my output for investigation.
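For what it's worth, here is the kind of quick check I can run on my side, assuming I also save f_pos/b_pos as suggested above (all paths below are placeholders), to confirm whether the frames are truly all zeros and whether the saved motion tensors are finite:

# Generic sanity checks for a black-video output; every path here is a placeholder.
import torch
import imageio.v2 as imageio

# 1) Is a rendered frame truly all zeros, or just very dark?
frame = imageio.imread("./output/my_task/frames/0000.png")
print("frame min/max:", frame.min(), frame.max())

# 2) Do the saved motion tensors contain NaN/Inf, e.g. from a bad conversion?
for name in ("f_pos", "b_pos"):
    t = torch.load(f"./output/my_task/{name}.pt", map_location="cpu")
    print(name, tuple(t.shape),
          "all finite:", torch.isfinite(t).all().item(),
          "coordinate range:", t.min().item(), t.max().item())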