Hi, thanks a lot for your interest in the INSTRUCTOR model!
You may try turning on the model's evaluation mode for inference. You may also wrap the inference call in `with torch.no_grad():` to avoid computing gradients.
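For example, a minimal sketch (assuming the model is loaded through the `InstructorEmbedding` package; adjust the checkpoint name to whichever INSTRUCTOR variant you use):

```python
import torch
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')
model.eval()  # disable dropout etc. for deterministic inference

with torch.no_grad():  # skip building the autograd graph, so activations are freed
    embeddings = model.encode([['Represent the science title:',
                                '3D ActionSLAM: wearable person tracking']])
```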
Feel free to add any further questions or comments!
I see. Let me try it out and update you on the results.
Same issue. Did you solve this, @nivibilla?
No, but I think it's an issue with Jupyter Notebook and PyTorch, not this repo. Not sure.
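One known Jupyter/PyTorch interaction (a hedged sketch, not confirmed for this repo): IPython caches cell outputs in `Out` and `_`, which can keep GPU tensors alive even after `del`. Clearing that cache may release them:

```python
# In a Jupyter/IPython session, cached cell outputs can pin GPU tensors.
import gc
import torch
from IPython import get_ipython

ip = get_ipython()
if ip is not None:
    ip.run_line_magic('reset', '-f out')  # drop the Out[...] history

gc.collect()
torch.cuda.empty_cache()  # return cached blocks to the CUDA driver
```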
Feel free to re-open this issue if you have any further questions or comments!
I am doing batch inference over a very large dataset, and I slowly run out of memory over time even though I am deleting all assigned variables. Here is the code
But after each iteration of the loop, some residual memory is retained on the GPU. The model itself takes 5685 MB of memory, but this number grows slightly after each loop, so after enough iterations I hit OOM. Could you tell me where the memory leak might be?
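For reference, a minimal sketch of a loop shape that moves each batch's embeddings off the GPU before the next iteration (this is not the original code; `texts` and `batch_size` are hypothetical placeholders):

```python
import gc
import torch

all_embeddings = []
with torch.no_grad():
    for start in range(0, len(texts), batch_size):
        batch = [['Represent the document:', t]
                 for t in texts[start:start + batch_size]]
        # convert_to_numpy copies the result to host memory immediately,
        # so nothing appended to all_embeddings keeps GPU tensors alive
        emb = model.encode(batch, convert_to_numpy=True)
        all_embeddings.append(emb)
        del batch, emb
        gc.collect()
        torch.cuda.empty_cache()  # release cached blocks back to the driver
```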