Open LeonNerd opened 1 year ago
There is a memory leak in the forward function of YOLOV5TorchObjectDetector: sliced tensors are assigned back into the original tensor, which keeps the computational graph alive. I am going to submit a PR to fix it (since I need the computational graph), but if you don't need the graph, wrapping your code in with torch.no_grad(): should fix the issue.
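A minimal sketch of the workaround described above, using a plain Linear layer as a hypothetical stand-in for the detector (the model and shapes here are illustrative, not the actual YOLOV5TorchObjectDetector code): outputs produced under grad tracking carry a graph back to the model's parameters, while the same call under torch.no_grad() does not.

```python
import torch

model = torch.nn.Linear(4, 2)  # hypothetical stand-in for the detector
x = torch.rand(8, 4)

# Default mode: the output is attached to a computational graph, and
# holding such outputs (or tensors written in-place from them) keeps
# that graph, and its memory, alive.
y = model(x)
print(y.requires_grad)   # True: graph is attached

# If gradients are not needed, run inference under no_grad so no graph
# is recorded and nothing accumulates across calls.
with torch.no_grad():
    y = model(x)
print(y.requires_grad)   # False: no graph to keep alive
```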
When I cycled through multiple images, and for each image cycled through multiple layers of the model, I saw abnormal memory growth. Increasing only the outer loop count showed that memory grows every time self.model(images) is called.
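The growth pattern in that loop can be reproduced with a small sketch (the model and images below are hypothetical stand-ins): every forward call inside the loop records a fresh graph, so retaining the outputs across iterations accumulates one graph per call, while the same loop under torch.no_grad() retains none.

```python
import torch

# Hypothetical stand-ins for the model and the images in the report.
model = torch.nn.Linear(16, 16)
images = [torch.rand(1, 16) for _ in range(3)]

kept = []
for img in images:
    out = model(img)   # each call records a graph back to the parameters
    kept.append(out)   # retaining outputs keeps every graph alive

# One graph per iteration is retained, which is why memory grows
# with the loop count.
print(sum(o.grad_fn is not None for o in kept))  # 3

# Under no_grad, the same loop records no graphs at all.
with torch.no_grad():
    kept = [model(img) for img in images]
print(sum(o.grad_fn is not None for o in kept))  # 0
```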