sunset1995 / DirectVoxGO

Direct voxel grid optimization for fast radiance field reconstruction.
https://sunset1995.github.io/dvgo

Run process always killed #40

Closed · x0s closed 2 years ago

x0s commented 2 years ago

Hi,

Thanks for sharing your work. I am having trouble running your script on the nerf_synthetic dataset as described in the README: the process seems to get killed because there is not enough memory, though I am not sure whether it is VRAM or RAM. I get similar output with the evaluation and video-rendering commands. Here is the output. Do you have any idea how to reduce memory consumption? Thanks

```
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/adam_upd_cuda/build.ninja...
Building extension module adam_upd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module adam_upd_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/render_utils_cuda/build.ninja...
Building extension module render_utils_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module render_utils_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/total_variation_cuda/build.ninja...
Building extension module total_variation_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module total_variation_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
No modifications detected for re-loaded extension module render_utils_cuda, skipping build step...
Loading extension module render_utils_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/ub360_utils_cuda/build.ninja...
Building extension module ub360_utils_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module ub360_utils_cuda...
Loaded blender (400, 800, 800, 4) torch.Size([160, 4, 4]) [800, 800, 1111.1110311937682] ./data/nerf_synthetic/lego
Killed
```
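A bare `Killed` with no Python traceback means the Linux OOM killer terminated the process, which points to system RAM rather than GPU memory (a CUDA out-of-memory error would instead raise a `RuntimeError` inside Python). The dataset shape printed just before the kill gives a lower bound on the RAM needed; a back-of-envelope sketch, assuming the loader stacks all images as float32:

```python
# Lower bound on RAM for the blender dataset shape in the log above:
# 400 images, 800x800 pixels, 4 channels (RGBA), stored as float32.
n_imgs, h, w, c = 400, 800, 800, 4
bytes_per_float32 = 4

imgs_gib = n_imgs * h * w * c * bytes_per_float32 / 1024**3
print(f"image stack alone: {imgs_gib:.2f} GiB")  # ~3.81 GiB

# Peak usage is noticeably higher: decoding buffers, the ray/pose
# tensors derived from these images, and PyTorch's own allocations
# all come on top, so a machine with only a few GiB of free RAM can
# get killed during loading.
```

If it was the OOM killer, `dmesg` should confirm it with a kernel line like `Out of memory: Killed process ...` naming the Python process.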
x0s commented 2 years ago

Solved by extending RAM, but it would be nice to know how to reduce the memory requirements of the scripts.
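For reference, since the whole image stack is built in RAM before training starts, the simplest lever is downscaling at load time. A minimal standalone sketch (this is not DVGO's loader, and `load_blender_downscaled` is a hypothetical helper; it assumes the standard `transforms_*.json` layout of the nerf_synthetic scenes):

```python
# Hypothetical standalone loader sketch: read the blender split one
# image at a time and downscale before stacking, so peak RAM holds at
# most one full-resolution image plus the reduced stack.
import json, os
import imageio.v2 as imageio
import numpy as np
import cv2

def load_blender_downscaled(basedir, split="train", factor=2):
    with open(os.path.join(basedir, f"transforms_{split}.json")) as f:
        meta = json.load(f)
    imgs = []
    for frame in meta["frames"]:
        path = os.path.join(basedir, frame["file_path"] + ".png")
        im = imageio.imread(path)                   # uint8, H x W x 4
        h, w = im.shape[0] // factor, im.shape[1] // factor
        im = cv2.resize(im, (w, h), interpolation=cv2.INTER_AREA)
        imgs.append(im.astype(np.float32) / 255.0)  # keep reduced copy only
    return np.stack(imgs)  # factor=2 -> 1/4 of the full-resolution memory
```

Halving the resolution cuts the image memory by 4x; whether the repo's own blender loader already exposes an equivalent half-resolution option is worth checking before rolling your own.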

saurabhmishra608 commented 1 year ago

> Solved by extending RAM, but it would be nice to know how to reduce the memory requirements of the scripts.

How much RAM is actually needed to run the scripts for training and inference?
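One way to pin this down empirically is to log the peak resident set size at the end of a run; a small sketch using only the Python standard library:

```python
# Report the peak RAM (max resident set size) of the current process.
# On Linux, ru_maxrss is in KiB; on macOS it is in bytes.
import resource
import sys

def peak_rss_gib():
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / (1024**3 if sys.platform == "darwin" else 1024**2)

# Call at the end of run.py (or any script) to see the high-water mark:
print(f"peak RAM: {peak_rss_gib():.2f} GiB")
```

For the lego scene above, the image stack alone needs roughly 4 GiB (see the estimate earlier in the thread), so comfortably more free RAM than that is a reasonable guess; this is an estimate, not a figure from the authors.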