Closed: Ahmad-Hammoudeh closed this issue 1 year ago.
Maybe you could downscale the image or use `torch.utils.checkpoint` to reduce the memory cost.
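A minimal sketch of gradient checkpointing with `torch.utils.checkpoint` (the blocks below are hypothetical stand-ins, not modules from this repository; `use_reentrant=False` assumes a reasonably recent PyTorch):

```python
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedNet(torch.nn.Module):
    def __init__(self, block1, block2):
        super().__init__()
        self.block1 = block1
        self.block2 = block2

    def forward(self, x):
        # Activations inside each checkpointed block are not stored during
        # the forward pass; they are recomputed during backward, trading
        # extra compute for a lower peak memory footprint.
        x = checkpoint(self.block1, x, use_reentrant=False)
        x = checkpoint(self.block2, x, use_reentrant=False)
        return x

# Hypothetical usage with placeholder sub-modules:
net = CheckpointedNet(
    torch.nn.Sequential(torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU()),
    torch.nn.Sequential(torch.nn.Conv2d(32, 32, 3, padding=1), torch.nn.ReLU()),
)
out = net(torch.randn(1, 3, 256, 128, requires_grad=True))
```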
Thanks for your response. I managed to solve the problem by using `with torch.no_grad():`, which reduced memory consumption a lot.
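For reference, a sketch of the kind of change described above (the model and input here are placeholders, not the demo's actual code): wrapping inference in `torch.no_grad()` means no autograd graph is built, so intermediate activations are not retained.

```python
import torch

# Hypothetical stand-in model and input; replace with the demo's real
# network and image tensor.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU())
image_batch = torch.randn(1, 3, 1080, 1920)

model.eval()
with torch.no_grad():
    # No gradients are tracked, so activations needed only for backprop
    # are freed immediately and peak GPU memory drops during inference.
    output = model(image_batch)
```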
Thanks for your awesome work; it gives really nice results!
I faced a problem when running the demo on an image with a resolution of 1920x1080, or when passing several images (about 5) of size (256, 128) to the model:
and this is the traceback:
I'm using a GTX 1060. I tried several values of `max_split_size_mb` with no luck (how it is typically set is sketched after this post). My question is: is it normal to run out of GPU memory when running the code?
Thanks in advance
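As context for the `max_split_size_mb` experiments mentioned above: it is read from the `PYTORCH_CUDA_ALLOC_CONF` environment variable and only reduces allocator fragmentation, so it cannot help if the model genuinely needs more memory than the GTX 1060 has. A hedged sketch of how it is usually set (the value 128 is only an example):

```python
import os

# Must be set before the CUDA allocator is initialized, so set it before
# importing anything that touches CUDA; 128 MB is an illustrative value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after setting the env var on purpose
```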