kanishkamisra / minicons

Utility for behavioral and representational analyses of Language Models
https://minicons.kanishka.website
MIT License

Using minicons with XLM-R-Base and got GPU out of memory (80GB GPU) #46

Closed fajri91 closed 7 months ago

fajri91 commented 8 months ago

Hi, I tried to compute the score below for a sentence of 350 words and got a GPU OOM:

from minicons import scorer

# stimuli: a list of sentences to score (as in the original report)
mlm_model = scorer.MaskedLMScorer('xlm-roberta-base', 'cuda')
mlm_model.sequence_score(stimuli, reduction=lambda x: -x.sum(0).item())

Is this case normal?

kanishkamisra commented 8 months ago

This could be normal: the MLM scoring method masks one token at a time and then computes the logits for all 350 masked copies in one forward pass, which amounts to a batch size of 350 -- super huge. I might have to find a way to create sub-batches when this happens, but I'm currently out of bandwidth. In case you'd like to take a look, please feel free to make a PR!
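For reference, the sub-batching idea could look something like the sketch below. This is not minicons code -- `model_logprobs` is a hypothetical stand-in for the real forward pass over masked copies -- the point is just chunking the n masked inputs so peak memory is bounded by the sub-batch size rather than the sequence length:

```python
# Hypothetical sketch of sub-batched MLM scoring; `model_logprobs`
# stands in for the real model forward pass and is an assumption,
# not a minicons API. Only the chunking logic is the point.

def chunked(items, size):
    """Yield successive sub-batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def score_in_subbatches(masked_inputs, model_logprobs, batch_size=32):
    """Run the forward pass over sub-batches of the masked copies and
    concatenate per-token scores, so peak memory scales with
    batch_size instead of the full sequence length."""
    scores = []
    for batch in chunked(masked_inputs, batch_size):
        scores.extend(model_logprobs(batch))
    return scores
```

With a 350-token sentence and `batch_size=32`, this runs 11 forward passes of at most 32 masked copies each instead of one pass over all 350.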