Hi, I was following the instructions in the README and am fine-tuning the model on CTW1500 in a Kaggle notebook. Every time I run the command:
```bash
python projects/SWINTS/train_net.py \
  --num-gpus 2 \
  --config-file projects/SWINTS/configs/SWINTS-swin-mixtrain.yaml
```
it fails with a CUDA out of memory error.

For additional context, I am using Kaggle's 2x Tesla T4 accelerator (the T4 x2 setup), with 14.8 GB of VRAM per GPU (29.6 GB in total).

It would be really helpful to get some guidance on this. I would also be grateful to know which GPU(s), and how many, are suitable for this task.
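Would lowering the batch size via a config override be the right way to fit training into memory? Below is a minimal sketch of what I have in mind, assuming SWINTS's train_net.py is built on detectron2 and accepts trailing KEY VALUE config overrides; SOLVER.IMS_PER_BATCH and SOLVER.AMP.ENABLED are standard detectron2 keys, and the values are guesses on my part, not taken from the README:

```bash
# Sketch only: assumes train_net.py uses detectron2's default_argument_parser,
# so trailing "KEY VALUE" pairs override the YAML config.
python projects/SWINTS/train_net.py \
  --num-gpus 2 \
  --config-file projects/SWINTS/configs/SWINTS-swin-mixtrain.yaml \
  SOLVER.IMS_PER_BATCH 2 \
  SOLVER.AMP.ENABLED True
# IMS_PER_BATCH 2 = one image per T4; AMP.ENABLED True enables mixed
# precision, which may reduce VRAM usage if the model supports it.
```

If this is not the intended way to reduce memory usage with this repo, please let me know what the recommended settings are.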