mwalczyk opened this issue 4 years ago
Hmm, I've never seen this error before. Did you try using one GPU first, for instance by setting `CUDA_VISIBLE_DEVICES=0`?
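Something like this, just as a minimal sketch (the key point is that the variable must be set before CUDA is initialized; on the command line, the equivalent is prefixing the launch command with it):

# Sketch: make only GPU 0 visible to PyTorch. Set the variable before CUDA is
# initialized, i.e. before the first CUDA call (or prefix the shell command with it).
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # reports 1 once only device 0 is visible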
Hey thanks for getting back to me so quickly. I can try that!
For what it's worth, I was able to get it to work by adding the following lines to fairnr/data/data_utils.py, in the function recover_image():
if torch.is_tensor(min_val):
min_val = min_val.float().to('cpu')
if torch.is_tensor(max_val):
max_val = max_val.float().to('cpu')
But I have no idea whether that is reasonable. I printed out the images variable in fairnr_model.py (in the function visualize()) and noticed that the dictionary contains two entries that aren't torch tensors, which I think was causing issues:
'render_normal/0_0:HWC': {
'img': tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
...,
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]], device='cuda:0'),
'min_val': -1, <---- This entry
'max_val': 1 <---- This entry
},
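If it helps, here is a slightly more general version of that guard I sketched (untested, and it assumes min_val/max_val are scalars, like the -1 and 1 shown above): collapse the bounds to plain Python floats so device placement can never mismatch.

import torch

def to_scalar(value):
    # Works for plain numbers and for 0-dim / single-element tensors on any device.
    if torch.is_tensor(value):
        return value.detach().float().cpu().item()
    return float(value)

# e.g. inside recover_image():
# min_val = to_scalar(min_val)
# max_val = to_scalar(max_val)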
With that in place, training proceeds: after reaching the requisite 500 iterations, it enters another 50 iterations on the "valid" subset and then fails after about 12 of those with a CUDA OOM error (pasted below for reference). Are there any guidelines on the minimum amount of VRAM required to run the training? Across my two GPUs, I believe I have 16 GB free. Alternatively, are there other ways to lower VRAM usage during training?
In the meantime, I will try your suggestion of setting the CUDA_VISIBLE_DEVICES env variable. Thanks!
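In case it's useful, here is the quick check I used to confirm how much memory each visible GPU has (plain PyTorch calls, nothing specific to this repo):

import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    total = props.total_memory / 1024 ** 3
    allocated = torch.cuda.memory_allocated(i) / 1024 ** 3
    print(f"GPU {i} ({props.name}): {total:.1f} GiB total, "
          f"{allocated:.1f} GiB allocated by this process")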
Traceback (most recent call last):
File "train.py", line 20, in <module>
cli_main()
File "/code/nsvf/fairnr_cli/train.py", line 353, in cli_main
torch.multiprocessing.spawn(
File "/code/nsvf/env/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/code/nsvf/env/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/code/nsvf/env/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/code/nsvf/env/lib/python3.8/site-packages/fairseq/trainer.py", line 615, in valid_step
_loss, sample_size, logging_output = self.task.valid_step(
File "/code/nsvf/fairnr/tasks/neural_rendering.py", line 303, in valid_step
loss, sample_size, logging_output = super().valid_step(sample, model, criterion)
File "/code/nsvf/env/lib/python3.8/site-packages/fairseq/tasks/fairseq_task.py", line 361, in valid_step
loss, sample_size, logging_output = criterion(model, sample)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/fairnr/criterions/rendering_loss.py", line 42, in forward
net_output = model(**sample)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/fairnr/models/fairnr_model.py", line 77, in forward
results = self._forward(ray_start, ray_dir, **kwargs)
File "/code/nsvf/fairnr/models/nsvf.py", line 78, in _forward
samples = self.encoder.ray_sample(intersection_outputs)
File "/code/nsvf/fairnr/modules/encoder.py", line 354, in ray_sample
sampled_idx, sampled_depth, sampled_dists = uniform_ray_sampling(
File "/code/nsvf/fairnr/clib/__init__.py", line 213, in forward
max_len = sampled_idx.ne(-1).sum(-1).max()
RuntimeError: CUDA out of memory. Tried to allocate 1.02 GiB (GPU 1; 7.92 GiB total capacity; 3.34 GiB already allocated; 638.38 MiB free; 6.34 GiB reserved in total by PyTorch)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/code/nsvf/env/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/code/nsvf/fairnr_cli/train.py", line 338, in distributed_main
main(args, init_distributed=True)
File "/code/nsvf/fairnr_cli/train.py", line 104, in main
should_end_training = train(args, trainer, task, epoch_itr)
File "/media/lightbox/Extra/anaconda/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/code/nsvf/fairnr_cli/train.py", line 204, in train
valid_losses = validate_and_save(args, trainer, task, epoch_itr, valid_subsets)
File "/code/nsvf/fairnr_cli/train.py", line 245, in validate_and_save
valid_losses = validate(args, trainer, task, epoch_itr, valid_subsets)
File "/code/nsvf/fairnr_cli/train.py", line 302, in validate
trainer.valid_step(sample)
File "/media/lightbox/Extra/anaconda/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/code/nsvf/env/lib/python3.8/site-packages/fairseq/trainer.py", line 630, in valid_step
return self.valid_step(sample, raise_oom=True)
File "/media/lightbox/Extra/anaconda/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/code/nsvf/env/lib/python3.8/site-packages/fairseq/trainer.py", line 631, in valid_step
raise e
File "/code/nsvf/env/lib/python3.8/site-packages/fairseq/trainer.py", line 615, in valid_step
_loss, sample_size, logging_output = self.task.valid_step(
File "/code/nsvf/fairnr/tasks/neural_rendering.py", line 303, in valid_step
loss, sample_size, logging_output = super().valid_step(sample, model, criterion)
File "/code/nsvf/env/lib/python3.8/site-packages/fairseq/tasks/fairseq_task.py", line 361, in valid_step
loss, sample_size, logging_output = criterion(model, sample)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/fairnr/criterions/rendering_loss.py", line 42, in forward
net_output = model(**sample)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/code/nsvf/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/code/nsvf/fairnr/models/fairnr_model.py", line 77, in forward
results = self._forward(ray_start, ray_dir, **kwargs)
File "/code/nsvf/fairnr/models/nsvf.py", line 78, in _forward
samples = self.encoder.ray_sample(intersection_outputs)
File "/code/nsvf/fairnr/modules/encoder.py", line 354, in ray_sample
sampled_idx, sampled_depth, sampled_dists = uniform_ray_sampling(
File "/code/nsvf/fairnr/clib/__init__.py", line 200, in forward
sampled_idx, sampled_depth, sampled_dists = _ext.uniform_ray_sampling(
RuntimeError: CUDA out of memory. Tried to allocate 522.00 MiB (GPU 1; 7.92 GiB total capacity; 5.06 GiB already allocated; 120.38 MiB free; 6.84 GiB reserved in total by PyTorch)
I have the same RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! This is on Ubuntu 20.04 with a GeForce RTX 3090 (24 GB memory), CUDA version 11.1. Thanks.
There are some tensors that are initialized but never moved onto the GPUs.
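A minimal illustration of that failure mode (generic PyTorch, not pointing at any specific file in this repo): any op between a CUDA tensor and a non-scalar CPU tensor raises exactly this RuntimeError, and explicitly moving the stray tensor onto the other tensor's device resolves it.

import torch

if torch.cuda.is_available():
    img = torch.zeros(4, 3, device="cuda:0")   # tensor created on the GPU
    offset = torch.full((4, 3), -1.0)          # tensor accidentally left on the CPU

    # img - offset  # raises: Expected all tensors to be on the same device ...
    result = img - offset.to(img.device)       # explicit .to() fixes the mismatch
    print(result.device)                       # cuda:0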
Hi,
I'm trying to run the train_wineholder.sh script on my machine. It works fine for the first 500 iterations, but immediately after the 500th iteration it pauses and eventually throws the following error about tensors existing on different devices.
Start of training:
Then at iter 500:
The only change I've made to the training script is reducing --view-per-batch to 1. Do you have any idea what the issue might be? I'm running this on Ubuntu 20.04 with two GeForce GTX 1080 GPUs, CUDA version 10.1. Let me know if I can provide any further info at this time! Thanks so much!