FutureXiang / soda

Unofficial implementation of "SODA: Bottleneck Diffusion Models for Representation Learning"

Issue with running the code on my machine #6

Open · Shivam101s opened this issue 2 months ago

Shivam101s commented 2 months ago

I have 2 GPUs (24 GB each) and I ran the following command:

python -m torch.distributed.launch --nproc_per_node=2 train.py --config config/cifar10.yaml --use_amp

And I am getting this error:

[rank1]: Traceback (most recent call last):
[rank1]:   File "/home/shivam/soda/train.py", line 175, in <module>
[rank1]:     train(opt)
[rank1]:   File "/home/shivam/soda/train.py", line 39, in train
[rank1]:     soda = SODA(encoder=Network(**opt.encoder),
[rank1]:   File "/home/shivam/soda/model/SODA.py", line 102, in __init__
[rank1]:     self.encoder = encoder.to(device)
[rank1]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1173, in to
[rank1]:     return self._apply(convert)
[rank1]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 779, in _apply
[rank1]:     module._apply(fn)
[rank1]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 779, in _apply
[rank1]:     module._apply(fn)
[rank1]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 804, in _apply
[rank1]:     param_applied = fn(param)
[rank1]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1159, in convert
[rank1]:     return t.to(
[rank1]: RuntimeError: CUDA error: device kernel image is invalid
[rank1]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/shivam/soda/train.py", line 175, in <module>
[rank0]:     train(opt)
[rank0]:   File "/home/shivam/soda/train.py", line 39, in train
[rank0]:     soda = SODA(encoder=Network(**opt.encoder),
[rank0]:   File "/home/shivam/soda/model/SODA.py", line 102, in __init__
[rank0]:     self.encoder = encoder.to(device)
[rank0]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1173, in to
[rank0]:     return self._apply(convert)
[rank0]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 779, in _apply
[rank0]:     module._apply(fn)
[rank0]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 779, in _apply
[rank0]:     module._apply(fn)
[rank0]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 804, in _apply
[rank0]:     param_applied = fn(param)
[rank0]:   File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1159, in convert
[rank0]:     return t.to(
[rank0]: RuntimeError: CUDA error: device kernel image is invalid
[rank0]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

E0429 19:47:43.021188 140093376303488 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 2003977) of binary: /home/shivam/anaconda3/envs/sodaa/bin/python
Traceback (most recent call last):
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/distributed/launch.py", line 198, in <module>
    main()
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/distributed/launch.py", line 194, in main
    launch(args)
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/distributed/launch.py", line 179, in launch
    run(args)
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/shivam/anaconda3/envs/sodaa/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-04-29_19:47:43
  host      : cyber
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 2003978)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-04-29_19:47:43
  host      : cyber
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 2003977)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
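
For what it's worth, "device kernel image is invalid" usually means the installed PyTorch wheel was not built for the GPUs' compute capability (or the CUDA driver/toolkit and the wheel are mismatched), rather than a bug in this repository's code. A minimal diagnostic sketch, assuming a standard PyTorch install (nothing here comes from the repo itself):

import torch

print("torch:", torch.__version__, "built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# Compute capability of each visible GPU (e.g. sm_86 for an RTX 3090).
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} (sm_{major}{minor})")

# Architectures the installed wheel was compiled for; the GPUs' sm_XY
# should be covered by this list.
print("compiled arch list:", torch.cuda.get_arch_list())

# Smoke test: moving a tensor to the GPU raises the same class of
# "invalid/missing kernel image" error when the build and device mismatch.
x = torch.randn(4, 4, device="cuda:0")
print("GPU smoke test OK:", x.sum().item())

If the GPUs' sm_XY does not appear in (or is not covered by) the compiled arch list, reinstalling a PyTorch wheel built against a CUDA version that matches the driver is the usual fix.
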
iyc2 commented 1 month ago

I have encountered the same problem. Have you resolved it?
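
As a side note, python -m torch.distributed.launch is deprecated in recent PyTorch releases in favor of torchrun. Assuming train.py reads the local rank from the LOCAL_RANK environment variable (the script may instead expect a --local_rank argument, which the deprecated launcher passes explicitly), an equivalent invocation would be:

torchrun --nproc_per_node=2 train.py --config config/cifar10.yaml --use_amp

This does not by itself fix the CUDA error above, but it avoids the deprecated launcher and its warnings.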