Closed · jamel-mes closed this issue 6 years ago
Hi @jamel-mes
My guess is that your GPU doesn't have enough memory to store the model. What is your GPU model and memory?
UPD: you can check this by running the `nvidia-smi` command in the terminal.
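If you want to do the same check from Python (a minimal sketch, assuming an NVIDIA driver with `nvidia-smi` on the PATH; the helper names here are hypothetical):

```python
import subprocess

def parse_gpu_line(csv_line):
    """Parse one CSV line such as 'GeForce GTX 1080, 8119 MiB, 512 MiB'
    into (name, total_mib, used_mib)."""
    name, total, used = [field.strip() for field in csv_line.split(",")]
    return name, int(total.split()[0]), int(used.split()[0])

def query_gpus():
    """Return (name, total_mib, used_mib) for every visible GPU.
    Requires nvidia-smi to be installed and on the PATH."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total,memory.used",
         "--format=csv,noheader"],
        text=True,
    )
    return [parse_gpu_line(line) for line in out.strip().splitlines()]
```

If the total memory reported is below what the model needs, you will hit the out-of-memory error as soon as the weights are moved to the GPU.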
I have a 1080 with 8 GB
The model takes ~9 GB, which is why you are getting the out-of-memory error. You can reduce the number of parameters for the generator, which is defined in this block:

```python
hidden_size = 1500
stack_width = 1500
stack_depth = 200
```

but in that case you will need to train the generative model from scratch, as we provide a pre-trained model only for the configuration above.
Great, thank you for your help!
@jamel-mes
I think there is another thing you can try in order to squeeze into your 8 GB of memory without changing the generator. Try reducing the batch size in the *Policy gradient with experience replay* and *Policy gradient without experience replay* steps from the default 10 to 5:

```python
for _ in range(n_policy_replay):
    rewards.append(RL.policy_gradient_replay(gen_data, replay,
                                             threshold=threshold, n_batch=5))

for _ in range(n_policy):
    rewards.append(RL.policy_gradient(gen_data, threshold=threshold, n_batch=5))
```

With this batch size the model took 6 GB of memory on my machine.
Decreasing the batch size does the trick!
> The model takes ~9 GB, which is why you are getting the out-of-memory error
Is there a way to estimate the memory need beforehand?
@gmseabra Technically yes: the values are stored as float32. That said, the easiest way to reduce memory usage is simply to decrease the batch size, as we discussed above. In that scenario you can keep using the pre-trained model and just try several batch sizes to see what fits into your GPU memory.
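As a rough back-of-the-envelope estimate (a sketch only: the per-sample activation cost is an assumed figure you would have to measure, and real usage also includes optimizer state and CUDA overhead):

```python
BYTES_PER_FLOAT32 = 4

def params_gib(n_params):
    """GiB needed just to store n_params float32 parameters."""
    return n_params * BYTES_PER_FLOAT32 / 2**30

def training_gib(n_params, activation_gib_per_sample, n_batch):
    """Very rough training-time estimate: parameters plus gradients
    (2x parameter storage), plus activations that grow linearly with
    batch size. activation_gib_per_sample is a measured or assumed cost."""
    return 2 * params_gib(n_params) + activation_gib_per_sample * n_batch
```

Since the activation term scales linearly with `n_batch`, halving the batch size from 10 to 5 removes roughly half of the activation memory, which is consistent with the drop from ~9 GB to ~6 GB reported above.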
I was actually thinking about the possibility of checking the available GPU memory and adjusting n_batch on the fly...
But yes, reducing the batch size works for me too (on a GTX 1060 with 6 GB of memory).
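One way to sketch that "adjust n_batch on the fly" idea (a hypothetical helper, not part of the repo): wrap the training step and halve the batch size whenever CUDA reports out of memory, which PyTorch surfaces as a `RuntimeError` containing "out of memory":

```python
def find_max_batch(run_step, start=10, min_batch=1):
    """Call run_step(n_batch) with decreasing batch sizes until one fits.

    run_step is expected to raise RuntimeError containing 'out of memory'
    (as PyTorch does on CUDA OOM) when the batch is too large.
    Returns the first batch size that succeeds.
    """
    n_batch = start
    while n_batch >= min_batch:
        try:
            run_step(n_batch)
            return n_batch
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise  # unrelated error: don't mask it
            n_batch //= 2  # halve and retry
    raise RuntimeError("out of memory even at the minimum batch size")
```

For example, `find_max_batch(lambda n: RL.policy_gradient(gen_data, threshold=threshold, n_batch=n))`. In a real loop you would also want to call `torch.cuda.empty_cache()` between attempts so the failed allocation is actually released.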
Good afternoon,
I've been using the code from the develop branch with PyTorch 0.4. I am getting the memory error below when executing this piece of code from the notebook example:
```
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCTensorMath.cu:35
```

Any idea of what might be causing this problem?