huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

End2end pytorch lightning errors #25525

Closed albertsun1 closed 1 year ago

albertsun1 commented 1 year ago

System Info

Who can help?

@shamanez

Information

Tasks

Reproduction

Hey @shamanez, I'm attempting to run `sh ./test_run/test_finetune.sh` using one GPU. Unfortunately, I've been running into errors with PyTorch Lightning. I've tried PyTorch Lightning version 1.6.4 as recommended in requirements.txt, but I still get errors. This other thread seems to hit the same type of bugs: #22210

pytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler LambdaLR doesn't follow PyTorch's LRScheduler API. You should override the LightningModule.lr_scheduler_step hook with your own logic if you are using a custom LR scheduler.
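For reference, a minimal sketch of the override that error message asks for. `RagModule` is a hypothetical name; in the real script this hook would go on the existing LightningModule subclass, and the signature shown is the Lightning 2.x one (1.x also takes an `optimizer_idx` argument):

```python
# Hedged sketch, not the repo's actual code: override the lr_scheduler_step
# hook so Lightning stops validating LambdaLR against its LRScheduler check.
class RagModule:  # in real code: class RagModule(pl.LightningModule)
    def lr_scheduler_step(self, scheduler, metric):
        # LambdaLR only needs a plain step(); the monitored metric is unused
        scheduler.step()
```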

I've also experimented with other versions to see if I could get it fixed, but it still doesn't work:

pytorch_lightning.utilities.exceptions.MisconfigurationException: You passed devices=auto but haven't specified accelerator=('auto'|'tpu'|'gpu'|'ipu'|'cpu') for the devices mapping, got accelerator=None.

I tried adding accelerator='gpu' or accelerator='auto' as a parameter to the Trainer code, but either way I got the same error.
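For what it's worth, here is a sketch of the pairing the error message asks for. Whether this alone clears the error in the rag-end2end-retriever script is an assumption; the keyword arguments themselves are standard Lightning (>= 1.6) Trainer options:

```python
# Hedged sketch: accelerator and devices must be set together, which is what
# the "devices=auto but haven't specified accelerator" error complains about.
trainer_kwargs = dict(
    accelerator="gpu",  # or "auto" / "cpu"
    devices=1,          # one GPU, matching the single-GPU run above
)
# trainer = pl.Trainer(**trainer_kwargs)
```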

Expected behavior

I'd expect the code to train a RAG end-to-end model, but it hits this error before training can start.

shamanez commented 1 year ago

I guess you are also using a newer Transformers version. My advice is to use the latest Transformers and Lightning. I can help with debugging the lightning errors.

albertsun1 commented 1 year ago

Hey, thanks so much for responding so quickly. When I upgraded to the latest stable version of Lightning (2.0.7) and Transformers (4.31), I ran into an issue where the most recent update of PyTorch Lightning removed support for pl.Trainer.add_argparse_args (https://github.com/hpcaitech/ColossalAI/issues/2938). As such, I got the following error:

Traceback (most recent call last):
  File "/home/albertsun/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 810, in <module>
    parser = pl.Trainer.add_argparse_args(parser)
AttributeError: type object 'Trainer' has no attribute 'add_argparse_args'

I'm not too familiar with PyTorch Lightning; do you know if there's a work-around for this parser code in finetune_rag.py? Thanks!

shamanez commented 1 year ago

Yes, the latest version of the Trainer no longer has that method.

You can pass the Trainer arguments manually instead:

https://lightning.ai/docs/pytorch/stable/common/trainer.html
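A minimal sketch of that manual replacement using stdlib argparse, since Lightning 2.x removed `pl.Trainer.add_argparse_args`. The flag names and defaults below are illustrative, not finetune_rag.py's exact argument list:

```python
# Hedged sketch: declare the Trainer flags you need by hand instead of
# calling the removed pl.Trainer.add_argparse_args(parser).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--accelerator", type=str, default="auto")
parser.add_argument("--devices", type=int, default=1)
parser.add_argument("--max_epochs", type=int, default=10)
parser.add_argument("--accumulate_grad_batches", type=int, default=1)

# In the script this would be parser.parse_args() on sys.argv:
args = parser.parse_args(["--accelerator", "gpu"])
# trainer = pl.Trainer(**vars(args))
```

Note that `pl.Trainer(**vars(args))` only works if every parsed flag is a valid Trainer keyword; in practice you would filter out the script's own arguments first.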

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

Rakin061 commented 4 months ago

Hello @albertsun1, I'm facing the same issue while training. Could you please shed some light on how you managed to solve this error?