pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

CUDA out of memory still exist after using FSDP #123070

Open TuyetHan opened 6 months ago

TuyetHan commented 6 months ago

🐛 Describe the bug

I want to train a model on an HPC cluster using SLURM and Accelerate to configure FSDP. However, no matter how I change the configuration, it seems to have little effect on CUDA memory usage (it doesn't save CUDA memory).

I've tried changing the ShardingStrategy, the number of nodes/GPUs/processes, MixedPrecision, the wrap policy, enabling/disabling CPUOffload, CPU RAM efficient loading, ... but the run still fails on the same line of code and still tries to allocate the same 29.07 GiB. The batch size is already 1, and when I reduce the input size the run returns a segmentation fault instead (I don't know why).

Below are the errors:

Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.

File "/project/p_trancal/CamLidCalib_Trans/Models/Encoder.py", line 45, in forward
    atten_out, atten_out_para = self.atten(x,x,x, attn_mask = attn_mask)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 1126, in forward
    attn_mask = F._canonical_mask(
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/nn/functional.py", line 5115, in _canonical_mask
    torch.zeros_like(mask, dtype=target_type)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 29.07 GiB. GPU 3 has a total capacity of 39.43 GiB of which 25.15 GiB is free. Including non-PyTorch memory, this process has 14.27 GiB memory in use. Of the allocated memory 11.74 GiB is allocated by PyTorch, and 932.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[2024-04-01 05:16:50,878] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 149171) of binary: /project/p_trancal/trsclbjob/bin/python
Traceback (most recent call last):
  File "/project/p_trancal/trsclbjob/bin/accelerate", line 8, in <module>
[2024-04-01 05:16:50,883] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 620204) of binary: /project/p_trancal/trsclbjob/bin/python
    sys.exit(main())
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    args.func(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1044, in launch_command
    multi_gpu_launcher(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 702, in multi_gpu_launcher
    distrib_run.run(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
/project/p_trancal/CamLidCalib_Trans/train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-04-01_05:16:50
  host      : cn04.head.komondor.hpc.einfra.hu
  rank      : 13 (local_rank: 1)
  exitcode  : 1 (pid: 149172)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-04-01_05:16:50
  host      : cn04.head.komondor.hpc.einfra.hu
  rank      : 14 (local_rank: 2)
  exitcode  : 1 (pid: 149174)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-04-01_05:16:50
  host      : cn04.head.komondor.hpc.einfra.hu
  rank      : 15 (local_rank: 3)
  exitcode  : 1 (pid: 149175)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-04-01_05:16:50
  host      : cn04.head.komondor.hpc.einfra.hu
  rank      : 12 (local_rank: 0)
  exitcode  : 1 (pid: 149171)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Traceback (most recent call last):
  File "/project/p_trancal/trsclbjob/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    args.func(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1044, in launch_command
    multi_gpu_launcher(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 702, in multi_gpu_launcher
    distrib_run.run(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
/project/p_trancal/CamLidCalib_Trans/train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-04-01_05:16:50
  host      : cn01.head.komondor.hpc.einfra.hu
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 620208)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-04-01_05:16:50
  host      : cn01.head.komondor.hpc.einfra.hu
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 620209)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-04-01_05:16:50
  host      : cn01.head.komondor.hpc.einfra.hu
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 620211)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-04-01_05:16:50
  host      : cn01.head.komondor.hpc.einfra.hu
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 620204)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
[2024-04-01 05:16:50,893] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 149923) of binary: /project/p_trancal/trsclbjob/bin/python
[2024-04-01 05:16:50,894] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 151323) of binary: /project/p_trancal/trsclbjob/bin/python
Traceback (most recent call last):
  File "/project/p_trancal/trsclbjob/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
Traceback (most recent call last):
    args.func(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1044, in launch_command
  File "/project/p_trancal/trsclbjob/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    multi_gpu_launcher(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 702, in multi_gpu_launcher
    distrib_run.run(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
    args.func(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1044, in launch_command
    multi_gpu_launcher(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/accelerate/commands/launch.py", line 702, in multi_gpu_launcher
    elastic_launch(
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    distrib_run.run(args)
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
/project/p_trancal/CamLidCalib_Trans/train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-04-01_05:16:50
  host      : cn03.head.komondor.hpc.einfra.hu
  rank      : 9 (local_rank: 1)
  exitcode  : 1 (pid: 149925)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-04-01_05:16:50
  host      : cn03.head.komondor.hpc.einfra.hu
  rank      : 10 (local_rank: 2)
  exitcode  : 1 (pid: 149927)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-04-01_05:16:50
  host      : cn03.head.komondor.hpc.einfra.hu
  rank      : 11 (local_rank: 3)
  exitcode  : 1 (pid: 149928)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-04-01_05:16:50
  host      : cn03.head.komondor.hpc.einfra.hu
  rank      : 8 (local_rank: 0)
  exitcode  : 1 (pid: 149923)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
    elastic_launch(
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
/project/p_trancal/CamLidCalib_Trans/train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-04-01_05:16:50
  host      : cn02.head.komondor.hpc.einfra.hu
  rank      : 5 (local_rank: 1)
  exitcode  : 1 (pid: 151324)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-04-01_05:16:50
  host      : cn02.head.komondor.hpc.einfra.hu
  rank      : 6 (local_rank: 2)
  exitcode  : 1 (pid: 151326)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-04-01_05:16:50
  host      : cn02.head.komondor.hpc.einfra.hu
  rank      : 7 (local_rank: 3)
  exitcode  : 1 (pid: 151327)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-04-01_05:16:50
  host      : cn02.head.komondor.hpc.einfra.hu
  rank      : 4 (local_rank: 0)
  exitcode  : 1 (pid: 151323)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: cn01: task 0: Exited with exit code 1
srun: launch/slurm: _step_signal: Terminating StepId=4179654.0
srun: error: cn04: task 3: Exited with exit code 1
srun: error: cn03: task 2: Terminated
srun: error: cn02: task 1: Terminated
srun: Force Terminated StepId=4179654.0
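
As an aside, the allocator hint at the end of the OOM message suggests PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True. That is just an environment variable (it could also be exported in the batch script); a minimal sketch of setting it from Python, assuming it runs before the first CUDA allocation in each process (I have not verified whether it changes anything here):

import os

# Must be set before the first CUDA allocation for the caching allocator to pick it up.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")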

This is my SLURM config:

#!/bin/bash
#SBATCH --job-name=Trial 
#SBATCH --partition=ai
#SBATCH --time=03:00:00
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:4
#SBATCH --mem=150GB

export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
export CUDA_LAUNCH_BLOCKING=1

head_node_ip=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)

#### Call Script #####
export LAUNCHER="accelerate launch \
    --config_file CamLidCalib_Trans/config/disGPU_accelerate.yaml  \
    --num_processes 16 \
    --num_machines $SLURM_NNODES \
    --machine_rank $SLURM_PROCID\
    --main_process_ip $head_node_ip \
    --main_process_port $UID  \
    --rdzv_backend c10d  \
    "
export SCRIPT="/project/p_trancal/CamLidCalib_Trans/train.py"
export CMD="$LAUNCHER $SCRIPT"
NCCL_P2P_DISABLE=1 NCCL_IB_DISABLE=1 srun $CMD

This is my main function:

if __name__ == "__main__":
    print('Start main')
    args = get_parser()
    num_gpus = torch.cuda.device_count()

    transformer_auto_wrapper_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls={
            EncoderBlock,
        },
    )

    sized_wrap_policy = functools.partial(
        size_based_auto_wrap_policy, min_num_params=20000
    )

    # Pass the advanced FSDP settings not part of the accelerate config by creating fsdp_plugin
    fsdp_plugin = FullyShardedDataParallelPlugin(
        auto_wrap_policy = transformer_auto_wrapper_policy,
        sharding_strategy = ShardingStrategy.FULL_SHARD,
        mixed_precision_policy = MixedPrecision(reduce_dtype =torch.float16),
    )
    # Initialize accelerator
    accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
    print('Check plugin: ', accelerator.state.fsdp_plugin)

    device = accelerator.device
    model = TransformerCalib(device=device, args=args)
    model = accelerator.prepare_model(model)

    dataSet = PreKittiData(root_dir=args.data_root, args=args)
    valid_loader = DataLoader(dataSet.getData(valid=False), batch_size=args.batch_size, drop_last=True, num_workers=4)

    optimizer = torch.optim.Adam(model.parameters(), lr=float(args.learning_rate))
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=args.sche_step_size, gamma=args.sche_gamma)
    optimizer, valid_loader, scheduler = accelerator.prepare(optimizer, valid_loader, scheduler)
    writer = SummaryWriter(args.save_writter_path)

    print('Node', accelerator.process_index , 'Start trainning...')
    train(model=model, train_loader=valid_loader, device=device, optimizer=optimizer,
          writer=writer, scheduler=scheduler, args=args)

    print('Node', accelerator.process_index , 'Success')
    writer.close()
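
For reference, a quick sanity check (not part of the script above) that can go right after prepare_model to confirm how FSDP actually wrapped the model is to print it on one rank:

if accelerator.is_main_process:  # hypothetical debugging lines, not in the original script
    print(model)  # FSDP-wrapped submodules appear as FullyShardedDataParallel(...) in the module tree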

And the result of printing the plugin:

Check plugin: FullyShardedDataParallelPlugin(sharding_strategy=<ShardingStrategy.FULL_SHARD: 1>, backward_prefetch=None, mixed_precision_policy=MixedPrecision(param_dtype=None, reduce_dtype=torch.float16, buffer_dtype=None, keep_low_precision_grads=False, cast_forward_inputs=False, cast_root_forward_inputs=True, _module_classes_to_ignore=(<class 'torch.nn.modules.batchnorm._BatchNorm'>,)), auto_wrap_policy=functools.partial(<function size_based_auto_wrap_policy at 0x7ff04fdf7d90>, min_num_params=20000), cpu_offload=CPUOffload(offload_params=True), ignored_modules=None, state_dict_type=<StateDictType.FULL_STATE_DICT: 1>, state_dict_config=FullStateDictConfig(offload_to_cpu=True, rank0_only=True), optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=True), limit_all_gathers=True, use_orig_params=True, param_init_fn=<function FullyShardedDataParallelPlugin.__post_init__.. at 0x7fefffba2200>, sync_module_states=True, forward_prefetch=False, activation_checkpointing=False)

Versions

Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Red Hat Enterprise Linux release 8.6 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: Could not collect
CMake version: version 3.20.2
Libc version: glibc-2.28

Python version: 3.12.1 | packaged by Anaconda, Inc. | (main, Jan 19 2024, 15:51:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 6000
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/scratch/software/packages/cuda/12.1/targets/x86_64-linux/lib/libcudnn.so.8.9.5
/scratch/software/packages/cuda/12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.5
/scratch/software/packages/cuda/12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.5
/scratch/software/packages/cuda/12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.5
/scratch/software/packages/cuda/12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.5
/scratch/software/packages/cuda/12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.5
/scratch/software/packages/cuda/12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7352 24-Core Processor
Stepping: 0
CPU MHz: 2300.000
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4591.32
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.2.1 pypi_0 pypi

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang

awgu commented 6 months ago

It looks like you are seeing out-of-memory (OOM) because your activation size is too large, which is not directly related to FSDP:

 File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 1126, in forward
   attn_mask = F._canonical_mask(
 File "/project/p_trancal/trsclbjob/lib/python3.10/site-packages/torch/nn/functional.py", line 5115, in _canonical_mask
   torch.zeros_like(mask, dtype=target_type)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 29.07 GiB. GPU 3 has a total capacity of 39.43 GiB of which 25.15 GiB is free. Including non-PyTorch memory, this process has 14.27 GiB memory in use. Of the allocated memory 11.74 GiB is allocated by PyTorch, and 932.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

You may want to check your input activation sizes since you are trying to allocate a 29.07 GiB attn_mask.
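
For a rough sense of scale: the traceback shows _canonical_mask materializing the mask with torch.zeros_like(mask, dtype=...), so the allocation is simply mask.numel() times the element size. A back-of-the-envelope sketch (the shape and dtype below are hypothetical; substitute the actual attn_mask you pass to self.atten):

import torch

def mask_gib(shape, dtype=torch.float32):
    # Memory of a dense tensor of this shape/dtype, in GiB.
    numel = 1
    for dim in shape:
        numel *= dim
    return numel * torch.finfo(dtype).bits / 8 / 2**30

# Hypothetical values -- a (batch * heads, seq_len, seq_len) float32 mask.
batch, num_heads, seq_len = 1, 8, 4096
print(mask_gib((batch * num_heads, seq_len, seq_len)))  # 0.5 GiB at these sizes

A 29.07 GiB allocation at that line means the mask being passed in is very large, and since FSDP shards parameters, gradients, and optimizer state rather than activations, changing the sharding configuration will not reduce this particular allocation.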