There are two approaches based on CLIP that I'm trying to compare here:
1) A ResNet-18 with a BERT base model - everything is updated during training
2) A ResNet-50 with a BERT base model - BERT is frozen
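For reference, the freezing in case 2 is done roughly like this (a minimal sketch, with a small stand-in module in place of the actual BERT base model):

```python
import torch
import torch.nn as nn

# Stand-in text encoder; in the real setup this is the BERT base model.
bert = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 512))

# Freeze: stop gradient computation for every parameter.
for p in bert.parameters():
    p.requires_grad = False
bert.eval()  # also fixes dropout / batch-norm statistics

trainable = sum(p.numel() for p in bert.parameters() if p.requires_grad)
print(trainable)  # → 0

# Note: freezing alone does not shrink activation memory in the forward
# pass; running the frozen encoder under torch.no_grad() does.
x = torch.randn(4, 768)
with torch.no_grad():
    feats = bert(x)
print(feats.requires_grad)  # → False
```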
I get an OOM error in the second case on the cached model_forward step, even though the second case has fewer trainable parameters (50 M vs. 110 M).
To give some context, I'm using PyTorch Lightning with the functional decorator, and it works well in the first case, providing a lot of benefits such as bigger batch sizes during training.
Any reason why this would happen?