Shivanandroy / simpleT5

simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗 and lets you quickly train your T5 models.

Data parallelism technique for training a simpleT5 model – CUDA out of memory problem #56

Open NashaatRJ opened 1 year ago

NashaatRJ commented 1 year ago

I have an AWS SageMaker instance with 8 GPUs, each with 32 GB of memory. However, when I tried to train a simpleT5 model for a text summarization task with large parameter settings, I ran into a CUDA out of memory error: a single GPU with 32 GB of memory was not enough for the job. Could you please help me resolve this by training the model with data parallelism, or any other suitable method?
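Since simpleT5 wraps PyTorch Lightning under the hood, one possible workaround is to drop down to Lightning directly and train the underlying T5 model with the distributed data parallel (DDP) strategy, which splits each batch across the 8 GPUs. The following is only a minimal sketch, not the simpleT5 API: the model name `t5-base`, the placeholder dataset, the hyperparameters, and the use of the newer `accelerator`/`devices`/`strategy` Trainer arguments (PyTorch Lightning ≥ 1.7) are all assumptions for illustration.

```python
# Sketch: multi-GPU T5 fine-tuning with PyTorch Lightning DDP (not the simpleT5 API).
# Dataset, model name, and hyperparameters below are placeholders / assumptions.
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import T5ForConditionalGeneration, T5TokenizerFast


class SummarizationDataset(Dataset):
    """Tokenizes (source, target) text pairs for T5. Replace with real data."""

    def __init__(self, pairs, tokenizer, max_src_len=512, max_tgt_len=128):
        self.pairs = pairs
        self.tokenizer = tokenizer
        self.max_src_len = max_src_len
        self.max_tgt_len = max_tgt_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        src, tgt = self.pairs[idx]
        enc = self.tokenizer(src, max_length=self.max_src_len, padding="max_length",
                             truncation=True, return_tensors="pt")
        dec = self.tokenizer(tgt, max_length=self.max_tgt_len, padding="max_length",
                             truncation=True, return_tensors="pt")
        labels = dec.input_ids.squeeze(0)
        labels[labels == self.tokenizer.pad_token_id] = -100  # ignore padding in the loss
        return {
            "input_ids": enc.input_ids.squeeze(0),
            "attention_mask": enc.attention_mask.squeeze(0),
            "labels": labels,
        }


class T5Summarizer(pl.LightningModule):
    def __init__(self, model_name="t5-base", lr=1e-4):
        super().__init__()
        self.model = T5ForConditionalGeneration.from_pretrained(model_name)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        out = self.model(input_ids=batch["input_ids"],
                         attention_mask=batch["attention_mask"],
                         labels=batch["labels"])
        self.log("train_loss", out.loss)
        return out.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)


if __name__ == "__main__":
    tokenizer = T5TokenizerFast.from_pretrained("t5-base")
    pairs = [("summarize: some long document ...", "a short summary")]  # placeholder data
    train_loader = DataLoader(SummarizationDataset(pairs, tokenizer),
                              batch_size=2, shuffle=True, num_workers=2)

    trainer = pl.Trainer(
        accelerator="gpu",
        devices=8,          # one process per GPU; effective batch = 8 x per-GPU batch
        strategy="ddp",     # data parallelism: each GPU holds a full model replica
        precision=16,       # mixed precision further reduces per-GPU memory
        max_epochs=3,
    )
    trainer.fit(T5Summarizer(), train_loader)
```

Note that DDP still keeps a full copy of the model, gradients, and optimizer state on every GPU, so it mainly helps when the per-GPU batch is what overflows memory. If a single sample already exceeds 32 GB, reducing `batch_size`, the source/target token lengths, or the precision, or switching to a sharded strategy (for example Lightning's `"deepspeed"` or `"fsdp"` strategies, if available in your version) may be needed instead.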