Open kamal94 opened 8 years ago
It is super resource intensive, yes. I saw elsewhere that Keras has a lot of memory leaks. I used to have a TensorFlow-only implementation that seemed lighter, but it was less convenient, which is why I opted for Keras for the release.
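For anyone hitting this on the TensorFlow backend: the usual band-aid I know of (generic Keras/TF 1.x advice, not something this repo ships) is to clear the Keras session between model builds and let TensorFlow grab GPU memory on demand instead of all at once. Rough sketch:

```python
# Generic mitigation for Keras/TensorFlow memory growth -- not this repo's code.
# Assumes Keras running on the TensorFlow 1.x backend.
import tensorflow as tf
from keras import backend as K

def fresh_session():
    """Drop the old graph/session so leaked tensors can be collected,
    then register a session that allocates GPU memory on demand."""
    K.clear_session()                       # release the previous graph and its tensors
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True  # grab GPU memory incrementally, not all up front
    K.set_session(tf.Session(config=config))

# e.g. call fresh_session() before (re)building a model, or between long training runs
```

This mainly helps when graph state piles up across repeated model builds; it will not shrink a dataset that is simply too big for RAM.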
@kamal94: Were you able to resolve that issue? I am having the same problem: my training fails, sometimes on epoch 1/200 or 2/200, and never goes beyond that. Any suggestions?
How do you train the autoencoder with train_generative_model.py successfully? I am running into some difficulty. Do I have to change something in the code?
Have you solved this issue? I am having the same problem: my training fails, sometimes on epoch 10/200 or 40/200, and never goes beyond that. Any suggestions?
Traceback (most recent call last):
  File "./train_generative_model.py", line 168, in <module>
After training the autoencoder, I tried to train the transition model as described in the same document, using [...] and [...] on two different tmux sessions.
Soon (a minute) after running the training command, the process is killed because my memory and swap (16 + 10 GB) are used up, and I'm still on epoch one.
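For what it's worth, the only way I can think of to keep host memory flat with data this size would be to stream batches from disk with a Python generator rather than loading everything up front. This is just a sketch of that idea, not the repo's pipeline; the chunk files, paths, and batch size below are made up:

```python
# Hypothetical sketch: stream training batches from disk so the whole dataset
# never has to fit in RAM/swap. Assumes the data were pre-split into .npy chunks.
import glob
import numpy as np

def batch_generator(pattern="data/chunk_*.npy", batch_size=32):
    """Yield (input, target) batches forever, loading one .npy chunk at a time.
    For an autoencoder the target is the input itself."""
    files = sorted(glob.glob(pattern))
    while True:
        for path in files:
            chunk = np.load(path)                   # only one chunk resident in memory
            for start in range(0, len(chunk), batch_size):
                batch = chunk[start:start + batch_size]
                yield batch, batch

# model.fit_generator(batch_generator(), steps_per_epoch=1000, epochs=200)
```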
Here is a dump: