facebookresearch / NSVF

Open source code for the paper of Neural Sparse Voxel Fields.
MIT License

Stuck with arg --distributed-no-spawn, strange OUT OF MEMORY message without it #48

Closed chuchong closed 3 years ago

chuchong commented 3 years ago

Describe the bug: when training with the arg --distributed-no-spawn, the program gets stuck; without --distributed-no-spawn, the program produces a strange "OUT OF MEMORY" error even though there is free GPU memory. To Reproduce: append the line --distributed-no-spawn in train_wineholder.sh as follows:

# just for debugging
DATA="Wineholder"
RES="800x800"
ARCH="nsvf_base"
SUFFIX="v1"
DATASET=/xxx/NSVF/data/Synthetic_NSVF/${DATA}
SAVE=/xxx/NSVF/$DATA
MODEL=$ARCH$SUFFIX
mkdir -p $SAVE/$MODEL
CUDA_VISIBLE_DEVICES="4,7"

# start training locally
python train.py ${DATASET} \
    --user-dir fairnr \
    --task single_object_rendering \
    --train-views "0..100" \
    --view-resolution $RES \
    --max-sentences 1 \
    --view-per-batch 2 \
    --pixel-per-view 2048 \
    --no-preload \
    --sampling-on-mask 1.0 --no-sampling-at-reader \
    --valid-view-resolution $RES \
    --valid-views "100..200" \
    --valid-view-per-batch 1 \
    --transparent-background "1.0,1.0,1.0" \
    --background-stop-gradient \
    --arch $ARCH \
    --initial-boundingbox ${DATASET}/bbox.txt \
    --raymarching-stepsize-ratio 0.125 \
    --use-octree \
    --discrete-regularization \
    --color-weight 128.0 \
    --alpha-weight 1.0 \
    --optimizer "adam" \
    --adam-betas "(0.9, 0.999)" \
    --lr-scheduler "polynomial_decay" \
    --total-num-update 150000 \
    --lr 0.001 \
    --clip-norm 0.0 \
    --criterion "srn_loss" \
    --seed 2 \
    --save-interval-updates 500 --max-update 150000 \
    --virtual-epoch-steps 5000 --save-interval 1 \
    --half-voxel-size-at  "5000,25000,75000" \
    --reduce-step-size-at "5000,25000,75000" \
    --pruning-every-steps 2500 \
    --keep-interval-updates 5 \
    --log-format simple --log-interval 1 \
    --tensorboard-logdir ${SAVE}/tensorboard/${MODEL} \
    --save-dir ${SAVE}/${MODEL} \
    --device-id 4 \
    --distributed-no-spawn

When running it, the program gets stuck.
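One detail worth noting in the script above (my observation, not a confirmed diagnosis): `CUDA_VISIBLE_DEVICES="4,7"` on its own line sets a plain shell variable without `export`ing it, so the `python` child process still sees all ten GPUs, which is consistent with the log below where ranks 0 through 9 are initialized. A minimal sketch of the difference:

```shell
# Start from a clean state for the demonstration.
unset CUDA_VISIBLE_DEVICES

# Unexported: assigned in the current shell only; the child process
# does not inherit the variable.
CUDA_VISIBLE_DEVICES="4,7"
python3 -c 'import os; print(os.environ.get("CUDA_VISIBLE_DEVICES"))'  # prints None

# Exported (or prefixed directly onto the command): the child process
# sees the mask and only GPUs 4 and 7 are visible to CUDA.
export CUDA_VISIBLE_DEVICES="4,7"
python3 -c 'import os; print(os.environ.get("CUDA_VISIBLE_DEVICES"))'  # prints 4,7
```

The same effect can be had by prefixing the launch line, e.g. `CUDA_VISIBLE_DEVICES="4,7" python train.py ...`.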

Without --distributed-no-spawn, it logs the following:

2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 7): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 4): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 8): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 3): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 2): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 5): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 1): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 0): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 6): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 6
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | distributed init (rank 9): tcp://localhost:14705
2021-03-23 00:46:08 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 9
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 7
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 4
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 8
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 3
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 2
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 5
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 1
2021-03-23 00:46:09 | INFO | fairseq.distributed_utils | initialized host ubuntu as rank 0
Traceback (most recent call last):
  File "train.py", line 20, in <module>
    cli_main()
  File "/home/lsy/NSVF/fairnr_cli/train.py", line 356, in cli_main
    nprocs=torch.cuda.device_count(),
  File "/home/lsy/anaconda3/envs/NSVF/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/lsy/anaconda3/envs/NSVF/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/home/lsy/anaconda3/envs/NSVF/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
    raise Exception(msg)
Exception: 

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/home/lsy/anaconda3/envs/NSVF/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/home/lsy/NSVF/fairnr_cli/train.py", line 338, in distributed_main
    main(args, init_distributed=True)
  File "/home/lsy/NSVF/fairnr_cli/train.py", line 50, in main
    args.distributed_rank = distributed_utils.distributed_init(args)
  File "/home/lsy/NSVF/3rd/fairseq-stable/fairseq/distributed_utils.py", line 107, in distributed_init
    dist.all_reduce(torch.zeros(1).cuda())
RuntimeError: CUDA error: out of memory

This is strange, since there is no message like "Tried to allocate 2.0 GiB". Moreover, nvidia-smi shows there is free space on GPUs 4 and 7.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.26       Driver Version: 430.26       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            Off  | 00000000:04:00.0 Off |                  N/A |
| 26%   45C    P2    79W / 250W |   8119MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  TITAN Xp            Off  | 00000000:05:00.0 Off |                  N/A |
| 30%   50C    P2    72W / 250W |   8119MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  TITAN Xp            Off  | 00000000:06:00.0 Off |                  N/A |
| 28%   47C    P2    73W / 250W |   8103MiB / 12196MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+
|   3  TITAN Xp            Off  | 00000000:07:00.0 Off |                  N/A |
| 27%   47C    P2    78W / 250W |   8135MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  TITAN Xp            Off  | 00000000:08:00.0 Off |                  N/A |
| 23%   28C    P8     8W / 250W |     10MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  TITAN Xp            Off  | 00000000:0B:00.0 Off |                  N/A |
| 32%   53C    P2   118W / 250W |   8334MiB / 12196MiB |     83%      Default |
+-------------------------------+----------------------+----------------------+
|   6  TITAN Xp            Off  | 00000000:0C:00.0 Off |                  N/A |
| 36%   60C    P2   146W / 250W |   8846MiB / 12196MiB |     81%      Default |
+-------------------------------+----------------------+----------------------+
|   7  TITAN Xp            Off  | 00000000:0D:00.0 Off |                  N/A |
| 23%   27C    P8     8W / 250W |     10MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   8  TITAN Xp            Off  | 00000000:0E:00.0 Off |                  N/A |
| 39%   63C    P2   194W / 250W |  12080MiB / 12196MiB |     49%      Default |
+-------------------------------+----------------------+----------------------+
|   9  TITAN Xp            Off  | 00000000:0F:00.0 Off |                  N/A |
| 29%   49C    P2    77W / 250W |   8105MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
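A plausible reading of the trace above (an assumption on my part, not a confirmed diagnosis): `cli_main` spawns `nprocs=torch.cuda.device_count()` workers, and since the device mask is not exported that means all ten GPUs. Some workers therefore land on cards that are already nearly full (e.g. GPU 8 at 12080/12196 MiB) and fail while `dist.all_reduce(torch.zeros(1).cuda())` creates its CUDA context, which would explain an OOM with no "Tried to allocate" figure. A torch-free sketch of how the mask limits the worker count, assuming it is honored:

```python
import os

def visible_gpu_count(total_gpus: int) -> int:
    """Mimic how CUDA_VISIBLE_DEVICES limits torch.cuda.device_count()."""
    mask = os.environ.get("CUDA_VISIBLE_DEVICES")
    if mask is None:
        return total_gpus  # no mask: every physical GPU is visible
    # A set mask exposes only the listed device ids.
    return len([d for d in mask.split(",") if d.strip() != ""])

# Without the mask exported, spawn would launch one worker per physical GPU.
os.environ.pop("CUDA_VISIBLE_DEVICES", None)
print(visible_gpu_count(10))  # 10

# With the mask exported, only two workers (GPUs 4 and 7) would be launched.
os.environ["CUDA_VISIBLE_DEVICES"] = "4,7"
print(visible_gpu_count(10))  # 2
```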


THANKS!