insilicomedicine / GENTRL

Generative Tensorial Reinforcement Learning (GENTRL) model

Strange error when using the pretrain code #19

Open yaowei2010 opened 4 years ago

yaowei2010 commented 4 years ago

Hello, I am trying your example code. For the pretraining step, I have generated the penalized_logP training files, but I get an error during model training.

Here is the error message:

CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 4.00 GiB total capacity; 2.92 GiB already allocated; 3.60 MiB free; 8.01 MiB cached)

[screenshot of the full traceback]

I tried changing the batch_size parameter and re-running "model.train_as_vaelp(train_loader, lr=1e-4)", but the error still occurs.

Is the problem with my GPU? I am using an NVIDIA Quadro P620 (4 GB VRAM max) with CUDA 10.0; the Torch version is 1.1.0.
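For what it's worth, the error message itself suggests the 4 GB card is nearly full before the failing 32 MiB allocation (2.92 GiB already allocated, only 3.60 MiB free). A rough, hypothetical back-of-envelope check (not part of GENTRL; `per_sample_bytes` would have to be measured for your model, e.g. by profiling one forward/backward pass) can estimate the largest batch size that plausibly fits in the remaining memory:

```python
def max_batch_size(free_bytes, per_sample_bytes, reserve_frac=0.1):
    """Estimate the largest batch that fits in free GPU memory.

    free_bytes      -- memory actually available on the device
    per_sample_bytes -- measured activation + gradient cost of one sample
    reserve_frac    -- fraction held back for allocator fragmentation
    """
    usable = free_bytes * (1 - reserve_frac)
    return max(1, int(usable // per_sample_bytes))


# Hypothetical numbers: ~1 GiB truly free, ~1 MiB per sample.
print(max_batch_size(1 << 30, 1 << 20))  # → 921
```

If the estimate comes out near 1 even with the whole card free, the model itself may simply be too large for 4 GB of VRAM regardless of batch size, which would be consistent with the error persisting after reducing batch_size.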