Hi, thanks for your interest. The EndoNeRF dataset is a single-viewpoint scene: the camera is fixed and the scene is dynamic. For the SCARED dataset, the camera moves and the scene is static. Here we follow the settings of previous works, including EndoNeRF and EndoSurf.
Currently, only these two public datasets are available. If datasets with both a moving camera and dynamic scenes become available, we could explore more challenging and interesting problems.
After training, we can observe the optimized Gaussians from any perspective or trajectory. You can try it yourself; in my experience, the visual quality is not as good as the fixed view for the EndoNeRF dataset.
Hi, how can we observe the optimized Gaussians from arbitrary perspectives or trajectories?
Hi,
After optimization, we can observe the Gaussians from any custom viewpoint. To achieve this, you can modify the current video_cameras in endo_loader.py, define your own camera viewpoint for each of the video cameras, and call the render function. We may add this feature in the future.
Another convenient way is to export the reconstructed point cloud from render.py and open it in 3D software such as MeshLab or Open3D, which makes it easy to control the viewpoint.
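To make the first option concrete, here is a minimal sketch of generating a custom circular trajectory of camera-to-world poses with NumPy. Note this is my own illustration, not code from this repo: the idea of wrapping each pose in the loader's camera class and feeding it to the render function is an assumption, and the exact camera/render interfaces in endo_loader.py may differ.

```python
import numpy as np

def look_at_pose(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 camera-to-world pose looking from `eye` toward `target`."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right      # camera x-axis
    pose[:3, 1] = true_up    # camera y-axis
    pose[:3, 2] = forward    # camera z-axis points at the target
    pose[:3, 3] = eye        # camera position
    return pose

def circular_trajectory(center, radius, height, n_frames=60):
    """Camera poses on a circle around `center`, all looking at the center."""
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
        eye = center + np.array([radius * np.cos(theta),
                                 height,
                                 radius * np.sin(theta)])
        poses.append(look_at_pose(eye, center))
    return poses

# 60 poses orbiting the scene origin; each pose could then be wrapped in the
# loader's camera class (hypothetical step) and passed to the render function.
poses = circular_trajectory(center=np.zeros(3), radius=2.0, height=0.5)
```

Any other trajectory (e.g. a spiral or a linear dolly) can be produced the same way by changing how `eye` is sampled per frame.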
You've been incredibly helpful, thank you. Sorry for my late response; I will try it later. Thank you again!
Hello,
I hope this message finds you well. I have a question about the camera trajectories in the ground-truth and trained videos: why are they identical? Is it possible to switch the camera perspective? I've seen videos from other 3DGS works where the output trajectory can be specified, so I'm curious about this difference and would appreciate any clarification.
Best regards.