NVlabs / neuralangelo

Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)
https://research.nvidia.com/labs/dir/neuralangelo/

Error occurred while training - preloading images #119

Closed · Ryan-ZL-Lin closed this 11 months ago

Ryan-ZL-Lin commented 11 months ago

Hi, I can train Neuralangelo with the toy example project; however, I ran into an issue with another project that requires many more images. I tried lowering a few hyperparameters such as `data.train.image_size` and `data.train.batch_size` (see the sketch after the log below), but it didn't help. Does anyone know how to tackle this issue? Here is my log for reference:

cudnn benchmark: True
cudnn deterministic: False
Setup trainer.
Using random seed 0
/home/ryan_lin/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/tinycudann/modules.py:53: UserWarning: tinycudann was built for lower compute capability (86) than the system's (89). Performance may be suboptimal.
  warnings.warn(f"tinycudann was built for lower compute capability ({cc}) than the system's ({system_compute_capability}). Performance may be suboptimal.")
model parameter count: 99,705,900
Initialize model weights using type: none, gain: None
Using random seed 0
Allow TensorFloat32 operations on supported devices
preloading images (train):  65%|███████████████████████████████████████████████████████████████▉                                  | 2093/3208 [00:12<00:08, 131.98it/s]ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 112358) of binary: /home/ryan_lin/miniconda3/envs/neuralangelo/bin/python
Traceback (most recent call last):
  File "/home/ryan_lin/miniconda3/envs/neuralangelo/bin/torchrun", line 10, in <module>
    sys.exit(main())
  File "/home/ryan_lin/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/ryan_lin/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/ryan_lin/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/ryan_lin/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/ryan_lin/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
train.py FAILED
-------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-09-21_09:43:57
  host      : RyanLegionPro7i.
  rank      : 0 (local_rank: 0)
  exitcode  : -9 (pid: 112358)
  error_file: <N/A>
  traceback : Signal 9 (SIGKILL) received by PID 112358
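
For reference, a minimal sketch of the kind of adjustment mentioned above, assuming these settings live in the generated per-scene config (e.g. `projects/neuralangelo/configs/custom/<experiment>.yaml`); the key names come from this thread, while the path and the concrete values are illustrative assumptions:

```yaml
# Hypothetical excerpt of the per-scene config; only the key names
# data.train.image_size and data.train.batch_size are taken from this thread,
# the values shown are illustrative.
data:
    train:
        image_size: [540, 960]  # lower the training resolution (assumed [H, W] format)
        batch_size: 1           # reduce the per-GPU batch size
```
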
chenhsuanlin commented 11 months ago

Hi @Ryan-ZL-Lin, it probably ran out of memory (OOM) while preloading too many images into RAM. You would probably have to set `data.preload=False`, or preprocess your images and downsize them.
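
For anyone else hitting this, a minimal sketch of where such a setting could go, assuming the same per-scene config file as above (the path and surrounding layout are assumptions; only `data.preload` itself is from this thread):

```yaml
# Hypothetical excerpt of the per-scene config
# (e.g. projects/neuralangelo/configs/custom/<experiment>.yaml).
data:
    preload: false  # load images from disk on the fly instead of caching them all in RAM
```

It may also be possible to override this on the command line when launching `train.py` (e.g. by appending `--data.preload=false` to the `torchrun` command), if the repo's dotted-key config overrides cover this option.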

Ryan-ZL-Lin commented 11 months ago

Thanks @chenhsuanlin, it's working.

DrMemoryFish commented 10 months ago

> Hi @Ryan-ZL-Lin, it probably ran out of memory (OOM) while preloading too many images into RAM. You would probably have to set `data.preload=False`, or preprocess your images and downsize them.

How do we set `data.preload=False`, or preprocess the images and downsize them?

I'm getting this error:

preloading images (train):  70%|█████████████████████████▏          | 241/344 [00:25<00:08, 11.52it/s]ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 188922) of binary: /home/drmemoryfish/miniconda3/envs/neuralangelo/bin/python
Traceback (most recent call last):
  File "/home/drmemoryfish/miniconda3/envs/neuralangelo/bin/torchrun", line 10, in <module>
    sys.exit(main())
  File "/home/drmemoryfish/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/drmemoryfish/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/drmemoryfish/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/drmemoryfish/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/drmemoryfish/miniconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
train.py FAILED
-------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-10-18_02:21:05
  host      : DrMemoryFish.
  rank      : 0 (local_rank: 0)
  exitcode  : -9 (pid: 188922)
  error_file: <N/A>
  traceback : Signal 9 (SIGKILL) received by PID 188922
=======================================================
DrMemoryFish commented 10 months ago

> Thanks @chenhsuanlin, it's working.

Do you mind sharing exactly where I can find this `data.preload=False` setting? Thank you.