Lightning-AI / pytorch-lightning

Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

PyTorch Lightning DDP crashes with unused parameters #17212

Open athn-nik opened 1 year ago

athn-nik commented 1 year ago

Bug description

When I try to run my code with PyTorch Lightning 2.0 and PyTorch 2.0 using the DDP strategy on 2 GPUs, I receive the following error, which did not happen with previous versions.

    return forward_call(*args, **kwargs)
  File "/home/nathanasiou/.venvs/space_updated/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1139, in forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, either by setting the string value `strategy='ddp_find_unused_parameters_true'` or by setting the flag in the strategy with `strategy=DDPStrategy(find_unused_parameters=True)`.

Once I change the strategy to ddp_find_unused_parameters_true, the code becomes terribly slow, i.e. around 12 hours per epoch, while an epoch previously took 2 minutes or less.

How can I deal with this? I want some of my parameters to stay frozen, but I also want to train with DDP and keep its speed advantages. Is there a way to handle these parameters and switch off the find-unused-parameters flag?
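
For reference, the two configurations named in the error message plug into the Trainer like this (a minimal sketch; the accelerator and device count are placeholders):

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import DDPStrategy

    # Option 1: the string shorthand from the error message.
    trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp_find_unused_parameters_true")

    # Option 2: the explicit strategy object, equivalent to the above.
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,
        strategy=DDPStrategy(find_unused_parameters=True),
    )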

How to reproduce the bug

No response

Error messages and logs

# Error messages and logs here please

Environment

Current environment

```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.31

Python version: 3.10.10 (main, Feb 8 2023, 14:50:01) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 5000
Nvidia driver version: 525.89.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz
Stepping: 4
CPU MHz: 3600.000
CPU max MHz: 3900,0000
CPU min MHz: 1200,0000
BogoMIPS: 7200.00
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 6 MiB
L3 cache: 8,3 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==2.0.0
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] No relevant packages
```

More info

No response

carmocca commented 1 year ago

2.0 with ddp_find_unused_parameters_true should match the speed of <2.0 with ddp. Otherwise there might be a bug somewhere.

delta-func commented 1 year ago

I think I am having the same issue, although setting ddp_find_unused_parameters_true is not that slow in my case.

I double-checked with the following code, and it printed nothing between enter and exit.

def on_before_optimizer_step(self, optimizer) -> None:
    print("on_before_opt enter")
    for p in self.trainable_params:
        if p.grad is None:
            print(p)
    print("on_before_opt exit")
awaelchli commented 1 year ago

on_before_optimizer_step is not the right place to check grads, because the training_step and backward run within the optimizer closure. You can inspect your grads in on_after_backward() instead.

This limitation of DDP cannot be overcome: your forward/backward either needs to use all parameters, or we need to allow DDP to "find" which parameters are unused. Lightning 2.0 simply switched the default. You can additionally enable the static graph option with DDPStrategy(find_unused_parameters=True, static_graph=True) to see if you get a speedup.
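
For example (a minimal sketch; only the strategy argument matters here, the rest are placeholders):

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import DDPStrategy

    # find_unused_parameters lets DDP tolerate parameters that receive no gradient;
    # static_graph tells DDP that the set of used parameters does not change across
    # iterations, so it can cache that information instead of re-detecting it.
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,
        strategy=DDPStrategy(find_unused_parameters=True, static_graph=True),
    )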

If there is no clear evidence of a bug, we would close the issue. Let us know how it goes.

delta-func commented 1 year ago

Hi, thanks for the reply. I tried on_after_backward(), but I got the same result: nothing prints in between.

Maybe this has to do with my usage of a state_dict hook that deletes some of the parameters to exclude them from checkpointing, although theoretically I think it should not affect DDP. However, if you think this is the cause, I will file it as a separate issue.

def on_after_backward(self) -> None:
    print("on_after_backward enter")
    for p in self.trainable_params:
        if p.grad is None:
            print(p)
    print("on_after_backward exit")
zyangH1 commented 1 year ago

I am running into the same problem. The error output (screenshot omitted) lists the names of the parameters that I need to freeze.

zyangH1 commented 1 year ago

Do I have to set find_unused_parameters=True?

athn-nik commented 1 year ago

@zyangH1 Yes, you have to do that if you have unused parameters. @awaelchli I am not getting any speedup using 2 devices compared to a single one: it takes the same time to complete an epoch with 1 device and batch size 32 as with 2 devices and batch size 16. Is this expected as well? I don't see a reason why this happens (previously it was faster).

awaelchli commented 1 year ago

it takes the same time to complete an epoch with 1 device and batch size 32 as with 2 devices and batch size 16. Is this expected as well?

Yes, that's expected: with 2 devices at batch size 16 each, every optimizer step still covers 32 samples in total, so an epoch has the same number of steps as the single-device run. The single-GPU experiment will probably even be a bit faster, since DDP adds some communication overhead.

meghana-kshirsagar commented 1 year ago

I have the same issue. I am trying to fine-tune/retrain a model (ESM); my initial code was using DDP, where I saw this error. I rewrote my code using PyTorch Lightning DDP and the error persists. It is very non-deterministic: sometimes an epoch finishes and sometimes it does not.

There are no unused parameters. I tried both find_unused_parameters=True (where it reports: "Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass") and the more explicit check that @delta-func proposed. In both cases there are no unused parameters, yet I get the error:

RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step.
RuntimeError: Rank 2 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0.

Could this be related to disk I/O? Some posts suggested changing num_workers in the dataloader.

Upon setting static_graph=True, I still get the error below:

RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, either by setting the string value `strategy='ddp_find_unused_parameters_true'` or by setting the flag in the strategy with `strategy=DDPStrategy(find_unused_parameters=True)`.

rob-hen commented 1 year ago

I encountered the same issue. I was manually setting p.requires_grad = False for some parameters. When I did this within the configure_optimizers method, I got the same error message as @athn-nik. However, when I did the same within the constructor __init__, it ran without the error message. @awaelchli Is this intended by design?

awaelchli commented 1 year ago

@rob-hen Yes, this makes sense to me. The model gets wrapped with DDP before the optimizers get configured, so DDP sees all parameters as trainable before you set p.requires_grad = False in configure_optimizers. Setting it right away in the constructor means that when DDP wraps the model, it can immediately mark these parameters as "unused".
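
In code, that pattern looks roughly like this (a minimal sketch; the submodules, sizes, and optimizer are placeholders, not the actual model from this thread):

    import torch
    import torch.nn as nn
    import pytorch_lightning as pl

    class FrozenBackboneModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Linear(32, 32)  # stand-in for the frozen part
            self.head = nn.Linear(32, 2)
            # Freeze here, in __init__, so that when DDP wraps the model the
            # frozen parameters are never registered for gradient syncing.
            for p in self.backbone.parameters():
                p.requires_grad = False

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self.head(self.backbone(x)), y)

        def configure_optimizers(self):
            # Only pass the still-trainable parameters to the optimizer.
            return torch.optim.Adam(
                (p for p in self.parameters() if p.requires_grad), lr=1e-3
            )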

nukes commented 1 year ago

I am hitting the same problem.

soumyadipghosh commented 1 year ago

Is there any update on this issue? I get this error if I use either find_unused_parameters=True or find_unused_parameters=False. Setting static_graph=True also doesn't help.

MehdiDeb commented 1 year ago

This error can appear when there are parameters that are not part of the computational graph, for example because of an 'if' statement. For me it was a layer that didn't get used during a forward pass. (Also, it may look obvious, but if you have multiple models involved in your architecture, the problem might not come from where you think, particularly if the aforementioned answers show nothing.)

There is a way to get the names of the unused parameters. You have to set the following environment variables:

    import os

    os.environ["TORCH_CPP_LOG_LEVEL"] = "INFO"
    os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"

This way the debug logs are much more detailed, and the names of the parameters that did not get a gradient during the backward pass are shown.

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions - the Lightning Team!

LightingMc commented 1 year ago

Any updates on this issue? I guess the best questions to ask are "how do I have untrainable parameters in a pl.LightningModule?" and "is there an example showing multiple resnets inside a pl.LightningModule?".

I am trying to have a non-trainable ensemble backbone whose features get passed into a trainable resnet (a rough sketch of such a setup follows at the end of this comment).

I have been getting the same error. I tried lightning<2.0 as well and got the same issue. The only workaround is to use DDPStrategy(find_unused_parameters=True), which is very slow.

I don't have any unused parameters. Just declaring ensemble backbones inside the pl.LightningModule gives me the same error.

@MehdiDeb's suggestion didn't give any more information. @rob-hen's suggestion also doesn't work for me. @meghana-kshirsagar's suggestion about num_workers also didn't work.
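
For the record, a minimal sketch of the kind of setup described above (frozen ensemble backbones feeding a trainable head), following the freeze-in-__init__ advice from earlier in the thread; the torchvision models, sizes, and hyperparameters are placeholders, and whether this avoids the error depends on the actual model:

    import torch
    import torch.nn as nn
    import pytorch_lightning as pl
    from torchvision.models import resnet18

    class FrozenEnsembleModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            # Non-trainable ensemble backbone: freeze before DDP wraps the model.
            self.backbones = nn.ModuleList([resnet18(num_classes=10) for _ in range(3)])
            for p in self.backbones.parameters():
                p.requires_grad = False
            self.head = nn.Linear(3 * 10, 10)  # trainable part

        def forward(self, x):
            with torch.no_grad():  # frozen features, no graph needed here
                feats = torch.cat([b(x) for b in self.backbones], dim=1)
            return self.head(feats)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.head.parameters(), lr=1e-3)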

tlc121 commented 1 year ago

Same problem when I train ControlNet. If I understand correctly, PL DDP is not suitable for fine-tuning tasks like SD, LoRA or ControlNet?

ArtemSivtsov commented 11 months ago

I train a pipeline with a UNet + EfficientNet backbone and hit the same issue on lightning==2.0.8. Plain "ddp" fails with the topic starter's error, but ddp_find_unused_parameters_true works fine at the previous speed; no huge increase in training time.

Dear authors, how should I change my LightningModule to solve this problem and get the full power of the DDP strategy? Maybe I need to change something in the training step, as the error says?

LightingMc commented 11 months ago

Hey, I was able to fix my issue.

https://github.com/Lightning-AI/lightning/issues/18457

Not really sure why, but defining the model like this was giving me errors w.r.t. unused parameters:

    class BoringModel(LightningModule):
        def __init__(self):
            super().__init__()

However, if I called my model's superclass constructor as below, I stopped getting the error.

    class BoringModel(LightningModule):
        def __init__(self):
            super(BoringModel, self).__init__()  # ------------------->>>>>>>>>>>>>>>

ArtemSivtsov commented 11 months ago

@LightingMc thanks for your comment! Funnily enough, this correction changed things: the model starts to train and goes through 5-6 steps, but after that it crashes again with the same error. Sad story :(

RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, either by setting the string value `strategy='ddp_find_unused_parameters_true'` or by setting the flag in the strategy with `strategy=DDPStrategy(find_unused_parameters=True)`.

LightingMc commented 11 months ago

@ArtemSivtsov interesting. I am not facing that issue, but my Lightning training does sometimes slow down dramatically after about 35 epochs.

ArtemSivtsov commented 11 months ago

@LightingMc a slowdown is bad behavior as well :( Let's hope someone from the Lightning team will help us.

carmocca commented 11 months ago

Slowness might be caused by https://github.com/Lightning-AI/lightning/issues/17725. You can verify by setting Trainer(logger=False).

ArtemSivtsov commented 11 months ago

Could the ddp_find_unused_parameters_true strategy option result in increased RAM usage? I see RAM consumption increasing throughout my training process, but I am not sure whether the problem is connected to ddp_find_unused_parameters_true or not. I ran into it after upgrading from version 1.5 to 2.0.

sagarwal-atg commented 11 months ago

Same issue. I tried the suggestions here.

ntlm1686 commented 11 months ago

Same issue when running the ControlNet.

premsa commented 11 months ago

Hello guys,

In my case the error is raised with models such as BERT and RoBERTa, but it is not raised with DeBERTa, in a multi-GPU setting with the same Lightning training code using transformers.AutoModel. This should be model-independent, right?

EsamGhaleb commented 11 months ago

Hello everyone,

I have the same issue when trying to fine-tune WAVLM_BASE from PyTorch. Even though there are no non-trainable parameters:

94.8 M    Trainable params
0         Non-trainable params
94.8 M    Total params

I applied the suggestions above, but that did not help either:

RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, either by setting the string value `strategy='ddp_find_unused_parameters_true'` or by setting the flag in the strategy with `strategy=DDPStrategy(find_unused_parameters=True)`.

Does anyone have a solution?

LightingMc commented 11 months ago

Solution: I was able to resolve this error for my particular use case. I was modifying the backbone of a ResNet-18, and I had to delete the components of the backbone that were being modified:

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            backbone = resnet18() if small else resnet50()
            backbone.requires_grad = False
            del backbone.fc  # This is the new line that is being added, which fixes my error.
            backbone.fc = nn.Identity()
            self.backbone = backbone

@ArtemSivtsov @carmocca let me know what you think.

carmocca commented 11 months ago

@awaelchli Since an exception gets raised anyway, do you think we could hook into it and print the list of unused parameters? This seems to be a recurring issue for new users, and printing the list of troublemakers might help them find possible solutions for their model.

WilliamHoo commented 11 months ago

Do we have an update on this problem? I had the same problem when I wrapped a distilroberta-base model into a PL model class.

rushi-the-neural-arch commented 11 months ago

This got solved for me after specifying strategy='ddp_find_unused_parameters_true'.

I was manually turning off gradient computation for some parts of the model, which seems to be the reason for the error.

    trainer = pl.Trainer(
        accelerator="gpu",
        devices=-1,
        max_epochs=args.epochs,
        strategy="ddp_find_unused_parameters_true",
        callbacks=[checkpoint_callback, lr_monitor],
    )

BUT yes, this does slow down training drastically!

nooshinyousefzadeh commented 10 months ago

I solved this by removing the part of the model architecture that was initialized but never used in the training workflow; it was being counted as unused parameters with no gradients.
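
As a minimal illustration of that failure mode (the module and names here are hypothetical):

    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.used = nn.Linear(16, 16)
            # Initialized but never called in forward(): under DDP with
            # find_unused_parameters=False, its parameters receive no gradient
            # and trigger the "unused parameters" RuntimeError.
            self.never_called = nn.Linear(16, 16)

        def forward(self, x):
            return self.used(x)  # removing `self.never_called` (or actually using it) resolves it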

balintlaczko commented 10 months ago

I have/had the same issue. Removing all the if-else branches in the model __init__ didn't help. Using strategy='ddp_find_unused_parameters_true' in the Trainer bypassed the error, but resulted in very slow performance (training on 4 GPUs was barely faster than training on just one), as @rushi-the-neural-arch also observed. Then I tried @LightingMc's __init__ tip, which more than doubled the speed! There is obviously something there. At this point training with 4 GPUs is around 2.87x faster than training on just one, which I imagine could be better, but it is acceptable for now. However, I also wonder how I could properly solve the issue. I can't see any parameters in the model that would only be initialized but then never used, and I don't have parameters with requires_grad=False. I still have to try @MehdiDeb's debug tip and see what I find.

donthomasitos commented 8 months ago

Same issue here. I use requires_grad=False on nested models within the module; requires_grad=False is set in their __init__ methods. In addition, I use automatic_optimization = False because of GAN-style losses. Is this a by-design PyTorch limitation, i.e. that those algorithms don't work well with DDP?

EDIT: using the static graph option in DDP solved the performance issue for now.
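
For context, a manual-optimization module in Lightning 2.x looks roughly like this (a minimal single-optimizer sketch with placeholder modules; a real GAN-style setup would configure two optimizers):

    import torch
    import torch.nn as nn
    import pytorch_lightning as pl

    class ManualOptModule(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.automatic_optimization = False  # take over the optimizer step
            self.net = nn.Linear(16, 1)

        def training_step(self, batch, batch_idx):
            opt = self.optimizers()
            opt.zero_grad()
            loss = self.net(batch).mean()
            self.manual_backward(loss)  # Lightning's replacement for loss.backward()
            opt.step()

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=1e-2)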

RylanSchaeffer commented 7 months ago

Has anyone solved this problem? I'm now finding that some of my runs are unexpectedly, extremely slow.

Gwen-JW commented 6 months ago

Hey, same issue here. I solved this problem by adding on_after_backward() to my LightningModule.

def on_after_backward(self):
    for name, param in self.named_parameters():
        if param.grad is None:
            print(name)

It prints the params that were not used. After freezing these params, DDP works in my case. Hope this helps.

Warhorze commented 6 months ago

Hi all,

I'm on pytorch_lightning==2.2.0 and ran into the same error, which I resolved with the following workaround. First, I set the log levels as described by @MehdiDeb, which showed me which parameters were unused (in my case contact_head and position_embeddings; I'm training an ESM model on a masked language modeling task). Then I added the following lines to my train/validation step, which pull those parameters into the loss with a zero coefficient so they are part of the graph and always receive a gradient. I first tried to add a callback, but that didn't do the trick.

    def training_step(self, batch, batch_idx):
        outputs = self.forward(batch)
        loss = outputs.loss
        loss += sum(
            0.0 * param.sum()
            for name, param in self.named_parameters()
            if "contact_head" in name or "position_embeddings" in name
        )
        return loss

Hopefully this helps someone!

rushi-the-neural-arch commented 5 months ago

This got solved for me after specifying strategy='ddp_find_unused_parameters_true'.

I was manually turning off gradient computation for some parts of the model, which seems to be the reason for the error.

    trainer = pl.Trainer(
        accelerator="gpu",
        devices=-1,
        max_epochs=args.epochs,
        strategy="ddp_find_unused_parameters_true",
        callbacks=[checkpoint_callback, lr_monitor],
    )

BUT yes, this does slow down training drastically!

UPDATE on this: in the latest version of pytorch_lightning, 2.2.0, training runs as fast as it regularly should when using multiple GPUs. Seems the issue is fixed now.

nlgranger commented 2 months ago

Even though "ddp_find_unused_parameters_true" works around the issue, it is still a bit confusing to initially hit a crash when using DDP. It's pretty common to need an nn.Module which is not being optimized.