Closed Tetsuo7945 closed 1 year ago
First of all, thanks to the authors of the paper.
Regarding the CUDA memory issue: the error comes from the size of the variables stored in GPU memory. The code could probably be reworked to allocate less GPU memory, at the cost of longer run times.
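As a rough back-of-the-envelope illustration of why this matters (the shapes below are hypothetical, not taken from the repo): a single float32 feature map grows linearly with the channel count, so shrinking the channel lists shrinks every stored activation proportionally.

```python
# Rough memory cost of one float32 activation tensor (4 bytes per element).
# Shapes are illustrative, not from the deep-image-prior code.
def tensor_bytes(shape, bytes_per_element=4):
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_element

big   = tensor_bytes((1, 128, 512, 512))  # a 128-channel feature map
small = tensor_bytes((1, 16, 512, 512))   # the same map with 16 channels
print(big // 2**20, "MiB vs", small // 2**20, "MiB")  # 128 MiB vs 16 MiB
```

Multiply that per-layer saving across every layer of the network (plus the gradients Autograd keeps for each) and dropping from 128 to 16 channels is often the difference between fitting on the GPU and an OOM error.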
Your options right now, IMHO:
Use the kate.png example and the following parameters for defining the skip model:
num_channels_down = [16] * 5,
num_channels_up = [16] * 5,
num_channels_skip = [16] * 5,
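For context, this is roughly where those parameters go: in the deep-image-prior inpainting notebook the model is built with the repo's `skip(...)` constructor. The sketch below is a fragment, not a runnable cell; `input_depth`, `img_np`, `pad`, and `dtype` come from earlier notebook cells, and the exact keyword names may differ in your checkout. The only change from the default setup is the three reduced channel lists.

```python
# Hedged sketch: same skip-network call as the inpainting notebook,
# but with 16 channels per level instead of 128 to cut GPU memory.
net = skip(input_depth, img_np.shape[0],
           num_channels_down=[16] * 5,
           num_channels_up=[16] * 5,
           num_channels_skip=[16] * 5,
           upsample_mode='nearest',
           need_sigmoid=True, need_bias=True,
           pad=pad, act_fun='LeakyReLU').type(dtype)
```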
@Tetsuo7945 if this fixed your problem, can you close it?
Received! Wishing you happy days, every day!!
Pretty sure I was testing this on colab. I'm no longer using colab, nor have I been attempting to use the software. I'm happy to close the issue.
@KirmTwinty thank you for your input 🙂
I successfully tested your inpainting algorithm (set up for kate.png and peppers.png) on my own image. I changed only this:
elif ('kate.png' in img_path) or ('peppers.png' in img_path) or ('normal.png' in img_path):
Unfortunately, on trying again with another image, I'm getting this error in the main loop:
I have tried restarting the runtime twice and running torch.cuda.empty_cache(), but apparently the memory is still allocated. Would you mind telling a newbie what's going on and how to resolve it?
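On the empty_cache() question: torch.cuda.empty_cache() can only return cached blocks that no live tensor still references, so memory held by variables from the failed run (the model, the network output, the loss) stays allocated until those Python names are deleted. The reference rule itself can be shown with plain Python; `Blob` here is just a stand-in for a GPU tensor, not anything from torch.

```python
import gc
import weakref

class Blob:
    """Stand-in for a GPU tensor held by a notebook variable."""
    pass

b = Blob()                  # like `out = net(net_input)` in the main loop
ref = weakref.ref(b)        # lets us observe whether the object is alive
gc.collect()
assert ref() is not None    # still referenced -> allocator cannot reclaim it

del b                       # like `del out` after the failed cell
gc.collect()
assert ref() is None        # now the memory can actually be freed
```

So in the notebook, `del` the tensors left over from the failed cell before calling torch.cuda.empty_cache(). If the OOM persists even after a full runtime restart, the image or model is simply too large for the GPU, and reducing the channel counts (or the image resolution) as suggested above is the way out.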