Closed ge35tay closed 1 year ago
Hi! Thanks for your great work again! Integrating semantics into the NeRF model is a really cool idea. I've run into some trouble with visualization: training runs perfectly on my machine, but when I try to visualize the mesh I always get a CUDA out-of-memory error. I tried reducing both the chunk and the netchunk to 8, but it still occurs. Do you have any idea? FYI, I noticed that the visualization uses another test.yaml — would it be possible to share this test.yaml?
Hi.
The config file should be the same as the one you used at training time.
It is weird to hit OOM during mesh extraction when the chunk parameter in render_fn is low, since the rendered results are incrementally moved to the CPU here: move_tensor_to_cpu. Could you please check whether the results are actually being shifted to CPU memory, so that your GPU memory usage stays bounded by the chunk parameter during rendering?
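To illustrate the idea, here is a minimal sketch of chunked rendering with incremental CPU offload. The function name `render_in_chunks` and the flat ray layout are assumptions for illustration, not this repo's exact API; the point is that moving each chunk's output off the GPU right away keeps peak GPU memory proportional to the chunk size, not the full ray count.

```python
import torch

def render_in_chunks(render_fn, rays, chunk=1024):
    """Render `rays` (N, D) in slices of size `chunk`.

    Each slice's output is moved to CPU immediately, so GPU memory
    usage is bounded by the chunk size rather than the total number
    of rays. `render_fn` is a hypothetical per-chunk rendering call.
    """
    outputs = []
    with torch.no_grad():  # no gradients needed at test/visualization time
        for i in range(0, rays.shape[0], chunk):
            out = render_fn(rays[i:i + chunk])  # computed on the GPU
            outputs.append(out.cpu())           # shift to CPU memory right away
    return torch.cat(outputs, dim=0)
```

If OOM persists with a small chunk, it usually means some intermediate tensor (or the accumulated outputs) is still resident on the GPU — worth checking that the `.cpu()` move actually happens before the next chunk is rendered.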
Closing this for now. Feel free to re-open it if you have any further questions.