Open Srivishnu27feb opened 1 year ago
Are you using the `torch.no_grad()` context manager (as gradients are not needed at inference time)?
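For reference, a minimal sketch of what wrapping inference in `torch.no_grad()` looks like — `infer` and the model here are placeholders, not the actual Table Transformer code:

```python
import torch

def infer(model, pixel_values):
    # no_grad() stops autograd from recording the computation graph,
    # so intermediate activations are not kept alive across calls.
    with torch.no_grad():
        outputs = model(pixel_values)
    return outputs
```

Without this, every forward pass retains activations for a backward pass that never happens, which looks exactly like a memory leak during batch inference.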
@NielsRogge Yes, I am using the context manager. I have a PDF document with 70 pages, so I am converting it with pdf2image and passing the pages to the model for inference on my GPU machine.
Hi team, I have a PDF of 20 pages. I am using the pdf2image library to convert it to images and passing each image for detection using threads, and I can see that memory piles up and is not deallocated until the entire application exits. From debugging, the build-up happens only when running table detection or table structure detection. In my Flask application I load the model once and reuse it in the function that runs inference on each page of the PDF. I have also tried `gc.collect()` and `del` on the variables, but with no luck. Is there any workaround that could help release the memory?
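One thing worth checking: `del` only removes a single name, and `gc.collect()` cannot free an object that something else still references — per-page outputs accumulated in a list or held by worker threads will pin the memory no matter how often you collect. (On GPU, `torch.cuda.empty_cache()` additionally returns PyTorch's cached blocks to the driver, but only for tensors that are already unreferenced.) A plain-Python sketch of the pitfall, with `PageOutput` and `results` as hypothetical stand-ins for a detection result and a per-page results list:

```python
import gc
import weakref

class PageOutput:        # stand-in for a large detection result
    pass

results = []             # e.g. outputs collected across threads
out = PageOutput()
results.append(out)      # a second reference, easy to overlook
ref = weakref.ref(out)   # lets us observe when the object dies

del out                  # removes the local name only...
gc.collect()
assert ref() is not None # ...the list still keeps the object alive

results.clear()          # drop the last remaining reference
gc.collect()
assert ref() is None     # now it is actually collected
```

So before anything else, it may be worth verifying that nothing in the Flask app (a results list, a cache, a thread-local) is still holding the per-page outputs after each page is processed.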