Open IUUI11 opened 1 year ago
Hello, are you sure? If you check nvidia-smi, is the process still running? I have checked and rewritten this repo a few times, and unfortunately it has some small mistakes, such as CUDA out of memory during model evaluation because the evaluation loop is not wrapped in `torch.no_grad()`.
Also, make sure you use the light version of the model with batch_size=1, and try decreasing the other parameters that affect the model architecture until the model fits in GPU memory.
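For reference, here is a minimal sketch of the `torch.no_grad()` fix mentioned above. The model and shapes are hypothetical stand-ins, not the repo's actual code; the point is that wrapping the evaluation forward pass in `torch.no_grad()` stops autograd from storing activations, which is the usual cause of OOM during evaluation:

```python
import torch

# Hypothetical stand-in for the repo's generator; shapes chosen for illustration.
model = torch.nn.Linear(4, 2)
x = torch.randn(8, 4)

model.eval()
with torch.no_grad():  # no computation graph is built, so activations are freed
    y = model(x)

# Without the no_grad() context, y.requires_grad would be True and the
# intermediate activations would stay in GPU memory across the eval loop.
print(y.requires_grad)  # False
```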
Same problem
Same problem: even with `--light=True`, memory runs out after 1k steps. NVIDIA GeForce RTX 3060, 12 GB GPU memory.
In my opinion, there are better solutions for unpaired style transfer, for example VSAIT: Unpaired Image Translation via Vector Symbolic Architectures.
Thanks @kirill-ionkin for pointing it out, I was trying VSAIT recently; for reference I found it here: https://github.com/facebookresearch/vsait
I was trying to run this model on a 2080 Ti, but it always says I do not have enough GPU memory. There is no other process using the GPU.