isayev / ReLeaSE

Deep Reinforcement Learning for de-novo Drug Design
MIT License

gpu running out of memory #4

Closed · jamel-mes closed this issue 6 years ago

jamel-mes commented 6 years ago

Good afternoon,

I've been using the code from the develop branch with PyTorch 0.4. I am getting the out-of-memory error below when executing this piece of code from the example notebook:

    ### Transfer learning 
    RL.transfer_learning(transfer_data, n_epochs=n_transfer)
    _, prediction = estimate_and_update(n_to_generate)
    prediction_log.append(prediction)
    if len(np.where(prediction >= threshold)[0])/len(prediction) > 0.15:
        threshold = min(threshold + 0.05, 0.8)

    RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCTensorMath.cu:35

Any idea of what might be causing this problem?

Mariewelt commented 6 years ago

Hi @jamel-mes

My guess is that your GPU doesn't have enough memory to store the model. What is your GPU model and memory?

UPD: you can check this by running the nvidia-smi command in the terminal.
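
Alternatively, here is a minimal check from inside Python (a sketch assuming PyTorch 0.4+, which this issue uses):

    import torch

    # Total memory on the first CUDA device vs. what PyTorch has allocated
    total = torch.cuda.get_device_properties(0).total_memory
    allocated = torch.cuda.memory_allocated(0)
    print('Total: %.1f GB, allocated: %.1f GB'
          % (total / 1024 ** 3, allocated / 1024 ** 3))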

jamel-mes commented 6 years ago

I have a GTX 1080 with 8 GB.

Mariewelt commented 6 years ago

The model takes ~9 GB, which is why you are getting the out-of-memory error. You can reduce the number of parameters for the generator, which is defined in this block:

    hidden_size = 1500
    stack_width = 1500
    stack_depth = 200

but in this case you will need to train the generative model from scratch, as we provide the pre-trained model only for the configuration above.
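
For instance, a smaller configuration along these lines could fit in 8 GB; the exact values below are illustrative guesses, not tested settings:

    # Illustrative reduced generator configuration (untested guesses).
    # Shrinking these dimensions shrinks the recurrent weight matrices
    # roughly quadratically, but requires retraining from scratch.
    hidden_size = 768
    stack_width = 768
    stack_depth = 100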

jamel-mes commented 6 years ago

Great, thank you for your help!

Mariewelt commented 6 years ago

@jamel-mes

I think there is another thing you can try in order to squeeze into your 8 GB of memory without changing the generator. Try reducing the batch size in the Policy gradient with experience replay and Policy gradient without experience replay steps from the default of 10 to 5:

    for _ in range(n_policy_replay):
        rewards.append(RL.policy_gradient_replay(gen_data, replay,
                                                 threshold=threshold, n_batch=5))

    for _ in range(n_policy):
        rewards.append(RL.policy_gradient(gen_data, threshold=threshold, n_batch=5))

With this batch size, the model took 6 GB of memory on my machine.

jamel-mes commented 6 years ago

Decreasing the batch size does the trick!

gmseabra commented 6 years ago

> The model takes ~9 GB, which is why you are getting the out-of-memory error.

Is there a way to estimate the memory need beforehand?

Mariewelt commented 6 years ago

@gmseabra technically yes, since the values are stored as float32. But I would say the easiest way to reduce memory usage is just decreasing the batch size, as we discussed above. In this scenario, you can keep using the pretrained model and just try several batch sizes to see what fits into your GPU memory.
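
As a very rough sketch of such an estimate (a back-of-the-envelope approximation, not code from this repo): weights, gradients, and optimizer state are each stored as float32 (4 bytes per parameter), and activations add more on top in proportion to batch size and sequence length:

    def rough_weight_memory_gb(n_params, copies=3):
        # Crude lower bound: weights + gradients + optimizer state,
        # 4 bytes each (float32). Activations are extra and grow with
        # batch size, so real usage is noticeably higher.
        return n_params * 4 * copies / 1024 ** 3

    # Hypothetical example: a model with 500 million parameters
    print('%.1f GB' % rough_weight_memory_gb(500 * 10 ** 6))  # ~5.6 GB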

gmseabra commented 6 years ago

I was actually thinking about the possibility of checking the memory size and adjusting n_batch on the fly, depending on the GPU memory available...

But yes, reducing the batch size works for me too (on a GTX 1060 with 6 GB of memory).
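
A minimal sketch of that on-the-fly idea (assuming PyTorch; the helper name and the ~9 GB cutoff, taken from the footprint reported above, are only illustrative):

    import torch

    def pick_n_batch(device=0, default=10, reduced=5, cutoff_gb=9):
        # Fall back to the smaller batch size on GPUs with less total
        # memory than the default configuration is reported to need.
        total_gb = torch.cuda.get_device_properties(device).total_memory / 1024 ** 3
        return default if total_gb >= cutoff_gb else reduced

    n_batch = pick_n_batch()  # then pass n_batch to RL.policy_gradient(...)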