Hello, I am trying your example code. In the pretrain step, I generated the penalized_logP train files, but I got an error at the model training step:
Here is the error message:
CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 4.00 GiB total capacity; 2.92 GiB already allocated; 3.60 MiB free; 8.01 MiB cached)
I tried changing the batch_size parameter and running "model.train_as_vaelp(train_loader, lr=1e-4)" again, but the error still occurred.
Is the problem with my GPU?
I am using an NVIDIA Quadro P620 (4 GB VRAM max) with CUDA 10.0.
My Torch version is 1.1.0.
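In case it helps: when lowering batch_size alone does not avoid the out-of-memory error, a common workaround is gradient accumulation, i.e. run several small batches and only step the optimizer after a few of them, so the effective batch size stays large while peak GPU memory drops. This is a minimal CPU sketch of the pattern, not the repo's actual training code; the tiny model and random data here are stand-ins:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: a tiny model and random data, just to
# demonstrate the accumulation pattern (runs on CPU).
torch.manual_seed(0)
model = nn.Linear(16, 1)
data = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))

# Use a small per-step batch (fits in memory)...
loader = DataLoader(data, batch_size=8)
accum_steps = 4  # ...but update weights as if batch_size were 32

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

opt.zero_grad()
for i, (x, y) in enumerate(loader):
    # Scale the loss so accumulated gradients average correctly.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate across the small batches
    if (i + 1) % accum_steps == 0:
        opt.step()
        opt.zero_grad()
```

If the model itself (not the batches) is what exhausts the 4 GB card, accumulation will not help and a smaller model or a larger GPU is needed.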