antithing closed this issue 5 months ago
Thank you for your suggestion. The large memory requirement is mainly caused by loading all the frames onto the GPU. The patch you mentioned should be able to reduce GPU memory usage, but some modification may be needed to incorporate it into our codebase. We will work on it.
Another way to reduce memory usage is to reduce the number of frames used per model by modifying the "duration" value in the config file (e.g., from 50 to 20).
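As a rough illustration (only the "duration" field and the 50-to-20 change come from the reply above; the file layout and the other field names are placeholders, not the project's actual config format):

```python
# Hypothetical per-scene config excerpt: loading fewer frames lowers GPU memory.
scene_config = {
    "source_path": "data/my_scene",  # placeholder path, not a real dataset
    "duration": 20,                  # was 50; number of frames loaded per model
}
```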
Thank you!
One more question: would outward-facing cameras with a large overlap work well? For example:
https://us.kandaovr.com/products/obsidian-pro
What if that camera rig were walked through a space? Assuming I can solve for the camera poses correctly, would your approach still work? Thanks!
That's a good question. Our method can work with an outward-facing camera rig, such as the one used for the Google Immersive Dataset, but we haven't yet tried a 360 VR camera rig. One challenge there is that the number of cameras is smaller (e.g., 8).
Under the moving-camera setting, if the scene is static, it should not be a problem for our method. If the scene is dynamic, the rendering quality may be impacted by the limited number of training views at each time step. We believe this can be addressed by incorporating additional regularization techniques, which we plan to investigate further.
Hi, I pushed lazy loading (storing images in CPU memory) and storing the GT images as int8: `--data_device cpu --gtisint8 1`. You can choose the setting based on your device.
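For readers unfamiliar with what these two options do, here is a minimal sketch of the idea, not the project's actual implementation (the class and method names are made up for illustration): ground-truth images stay on the CPU, optionally quantized to uint8, and a single frame is moved to the GPU and converted back to float only when it is needed for a training iteration.

```python
import torch

class LazyGTImage:
    """Sketch of lazy, low-memory ground-truth image storage (illustrative only)."""

    def __init__(self, image_chw: torch.Tensor, data_device: str = "cpu", as_int8: bool = True):
        # Keep the image on the chosen device (CPU by default) and optionally
        # quantize [0, 1] floats to uint8, cutting the footprint by 4x vs. float32.
        img = image_chw.clamp(0.0, 1.0)
        self._int8 = as_int8
        if as_int8:
            img = (img * 255.0).to(torch.uint8)
        self._data = img.to(data_device)

    def to_gpu(self) -> torch.Tensor:
        # Move the single frame to the GPU and convert back to float32 in [0, 1]
        # only for the current iteration; the CPU copy remains the long-term store.
        img = self._data.cuda(non_blocking=True)
        return img.float() / 255.0 if self._int8 else img
```

In a training loop, such objects would be built once at data-loading time, and `.to_gpu()` would be called right before computing the loss against the rendered image, so only the frames used in the current iteration occupy GPU memory.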
Hi! Thank you for making this project available.
Is the high GPU memory requirement inherited directly from the original Gaussian Splatting implementation?
This PR:
https://github.com/graphdeco-inria/gaussian-splatting/pull/437#issuecomment-1849614741
seems to lower the requirement. Would it be possible to apply the same patch here?
Thanks!