YuliangXiu / ECON

[CVPR'23, Highlight] ECON: Explicit Clothed humans Optimized via Normal integration
https://xiuyuliang.cn/econ

Questions about training #109

Closed glorioushonor closed 8 months ago

glorioushonor commented 8 months ago

Hello, I have attempted to retrain ECON_IF and have a few questions:

1. Approximately how much GPU memory is needed to train the model? I used two RTX 3090 GPUs, but even after reducing the batch size to 4 I still got an out-of-memory error. Only when I reduced it further, to 2, did training start, and just barely.
2. While getting the program to run, I noticed that an incorrect version of pytorch_lightning causes numerous errors. I resolved these on my own, but I suggest documenting the version you used to make future replication easier.

The out-of-memory error message is below.

wandb: Currently logged in as: lin_jie. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.16.0
wandb: Run data is saved locally in ./results/wandb/wandb/run-20231127_204625-bdmfznub
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run IF-Geo-256-3dpeople-axyz-renderpeople-renderpeople_p27-humanalloy-thuman2
wandb: ⭐️ View project at https://wandb.ai/lin_jie/IF-Geo
wandb: 🚀 View run at https://wandb.ai/lin_jie/IF-Geo/runs/bdmfznub
{'train': 3636, 'val': 756, 'test': 180}
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
`Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used..
`Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used..
`Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..
{'train': 3636, 'val': 756, 'test': 180}
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------

You are using a CUDA device ('NVIDIA GeForce RTX 3090') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
Loading thuman2-train-0100: 100%|████████████████████████████████████████| 101/101 [02:16<00:00,  1.35s/it]
load from ./data/thuman2/train.txt
total: 104
Loading thuman2-train-0100: 100%|████████████████████████████████████████| 101/101 [02:17<00:00,  1.36s/it]
load from ./data/thuman2/train.txt
total: 104
Loading thuman2-val-0525: 100%|████████████████████████████████████████| 21/21 [00:29<00:00,  1.39s/it]
load from ./data/thuman2/val.txt
Loading thuman2-val-0525: 100%|████████████████████████████████████████| 21/21 [00:29<00:00,  1.43s/it]
load from ./data/thuman2/val.txt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [2,3]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [2,3]
โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”“
โ”ƒ   โ”ƒ Name        โ”ƒ Type          โ”ƒ Params โ”ƒ
โ”กโ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
โ”‚ 0 โ”‚ netG        โ”‚ IFGeoNet      โ”‚  3.2 M โ”‚
โ”‚ 1 โ”‚ reconEngine โ”‚ Seg3dLossless โ”‚      0 โ”‚
โ””โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
Trainable params: 3.2 M                                                                                                                                                       
Non-trainable params: 0                                                                                                                                                       
Total params: 3.2 M                                                                                                                                                           
Total estimated model params size (MB): 12                                                                                                                                    
Epoch 0/19 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/523 0:00:00 • -:--:-- 0.00it/s loss: nan v_num: znub val/loss: 0.225 Exception in thread Thread-3:
Traceback (most recent call last):
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/utils/data/_utils/pin_memory.py", line 49, in _pin_memory_loop
    do_one_step()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/utils/data/_utils/pin_memory.py", line 26, in do_one_step
    r = in_queue.get(timeout=MP_STATUS_CHECK_INTERVAL)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/queues.py", line 116, in get
    return _ForkingPickler.loads(res)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 305, in rebuild_storage_fd
    fd = df.detach()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/resource_sharer.py", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/resource_sharer.py", line 87, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/connection.py", line 508, in Client
    answer_challenge(c, authkey)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/connection.py", line 752, in answer_challenge
    message = connection.recv_bytes(256)         # reject large message
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
Epoch 0/19 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/523 0:00:00 • -:--:-- 0.00it/s loss: nan v_num: znub val/loss: 0.225 Traceback (most recent call last):
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/media/vision/linjie/ECON-master/apps/train-IF-geo.py", line 125, in <module>
    trainer.fit(model=model, datamodule=datamodule)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 113, in launch
    mp.start_processes(
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 139, in _wrapping_function
    results = function(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in _run
    results = self._run_stage()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1191, in _run_stage
    self._run_train()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1214, in _run_train
    self.fit_loop.run()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 213, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(optimizers, kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 202, in advance
    result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 249, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 370, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1356, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1754, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 169, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 234, in optimizer_step
    return self.precision_plugin.optimizer_step(
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 119, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
    return wrapped(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/optim/optimizer.py", line 140, in wrapper
    out = func(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/optim/optimizer.py", line 23, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/optim/rmsprop.py", line 109, in step
    loss = closure()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 105, in _wrap_closure
    closure_result = closure()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 149, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 135, in closure
    step_output = self._step_fn()
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 419, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1494, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp_spawn.py", line 280, in training_step
    return self.model(*args, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1040, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1000, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 98, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "/media/vision/linjie/ECON-master/apps/IFGeo.py", line 117, in training_step
    preds_G = self.netG(batch)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/vision/linjie/ECON-master/lib/net/IFGeoNet.py", line 141, in forward
    net = self.actvn(self.conv_0(net))
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 613, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/media/vision/linjie/.conda/envs/ECON/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 597, in _conv_forward
    return F.conv3d(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 1; 23.69 GiB total capacity; 20.46 GiB already allocated; 828.44 MiB free; 21.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
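
The error message above suggests tuning the caching allocator when reserved memory greatly exceeds allocated memory. A minimal sketch of applying that hint, assuming the variable can be set before torch initializes CUDA; the value 128 is an arbitrary starting point, not a tested ECON recommendation:

```python
# Hypothetical mitigation following the allocator hint in the error above.
# PYTORCH_CUDA_ALLOC_CONF must be in the environment before the first CUDA
# allocation, so set it at the very top of the training entry point
# (or export it in the shell before launching).
import os

os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported only after the env var is set, so the allocator sees it
```

This only reduces fragmentation; if the model genuinely needs more than 24 GB per rank at batch size 4, the batch size or resolution still has to come down.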
YuliangXiu commented 8 months ago

In my conda env, PyTorch-Lightning==2.1.0.

As for the GPUs, I use two Quadro RTX 5000 cards (2 × 16 GB of memory).
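
A fail-fast guard against this kind of version mismatch, assuming one prefers an explicit error over cryptic failures mid-training; the check is hypothetical and not part of the ECON codebase:

```python
# Hypothetical sanity check for the pytorch-lightning version discussed above.
import pytorch_lightning as pl

if not pl.__version__.startswith("2.1"):
    raise RuntimeError(
        f"This setup was reported working with pytorch-lightning 2.1.x, "
        f"but found {pl.__version__}; expect API mismatches."
    )
```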

glorioushonor commented 8 months ago

> In my conda env, PyTorch-Lightning==2.1.0.
>
> As for the GPUs, I use two Quadro RTX 5000 cards (2 × 16 GB of memory).

That's indeed strange. Two RTX 3090 GPUs give 2 × 24 GB of memory, yet I could only set the batch size to 2. Did you perhaps optimize the code further in later updates?

YuliangXiu commented 8 months ago

The training was conducted either on my local workstation with RTX 5000 GPUs or on an MPI cluster with A100s. The batch size of 4 was intended for cluster training; use a smaller one on a local workstation.

In any case, the batch size does not affect the final performance much. You can also train IF-Nets++ at a lower resolution (256 → 128) without significantly hurting the final quality. And because the network is fully convolutional, an IF-Nets++ model trained at resolution 128 can be applied directly to 256³ volumetric shapes.
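
A small illustration of the fully convolutional property mentioned above: 3D convolutions have no resolution-dependent weights, so a model trained at one grid size accepts another at inference. This is a toy stand-in, not the actual IFGeoNet, with resolutions scaled down (32 → 64 instead of 128 → 256) for a quick CPU run:

```python
# Toy demonstration (not IFGeoNet): a fully convolutional 3D network runs
# unchanged on grids of any size, since none of its layers depend on the
# spatial resolution. The same argument covers 128^3 -> 256^3.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),  # padding preserves spatial size
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)

for res in (32, 64):  # stand-ins for the 128 "train" and 256 "inference" grids
    x = torch.randn(1, 1, res, res, res)
    with torch.no_grad():
        y = net(x)
    print(f"input {res}^3 -> output spatial shape {tuple(y.shape[2:])}")
```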