switchkiller opened this issue 7 years ago
A 2 GB or 4 GB GPU is not enough! A 12 GB GPU (Amazon p2 instance) will resolve your out-of-memory error. It is true that an iteration might still take hours.
1. Are we training only on nb_train_samples examples? => Yes.
2. How much data does one iteration train on? => As much data as you have; the more data, the better the accuracy. Don't expect good results from training on 1,000 samples, as the model will overfit. Train on at least 50,000 examples.
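For what it's worth, here is a minimal sketch (with made-up numbers, not the project's actual configuration) of how nb_train_samples and batch_size together determine how much data one iteration/epoch covers:

```python
# Hypothetical values for illustration only; the real ones come from
# vocabulary-embedding / train.ipynb and your own dataset.
nb_train_samples = 50000   # one epoch ("iteration") passes over this many examples
batch_size = 64            # smaller batches use less GPU memory but need more steps

# Number of gradient updates needed to cover one epoch:
steps_per_epoch = nb_train_samples // batch_size
print(steps_per_epoch)     # -> 781 updates over the 50,000 training samples
```

Halving batch_size roughly halves the per-batch memory footprint, at the cost of about twice as many updates per epoch.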
Hi @switchkiller
Did you get your code working?
Thanks, Dhruven Vora
@ramtest55 Thank you so much for the information!
@dhruvenvora Although my code is working well, I am not able to train it on my system.
Hello @udibr
Thank you for the awesome project!
However, I am stuck training the model as done in train.ipynb. I have an 8-core CPU and an Nvidia GTX 660M (2 GB), which is pretty standard. I created the vocabulary embedding using a 100K dataset of news articles. With TensorFlow I encountered a resource error (out of memory); switching to Theano and reducing the batch size resolved it.
However, completing iteration 0 alone takes around 3-4 hours.
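For reference, a minimal sketch of the kind of memory workaround mentioned above, assuming the TensorFlow 1.x backend with Keras that this thread dates from; allow_growth and the batch size below are illustrative assumptions, not settings from train.ipynb:

```python
# Illustrative only: let TensorFlow 1.x allocate GPU memory on demand instead
# of reserving it all up front, then train with a small batch size on a 2 GB GPU.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True       # grab GPU memory incrementally
K.set_session(tf.Session(config=config))     # make Keras use this session

batch_size = 16                              # small batches to fit in 2 GB
```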
Here is what I don't understand:
1. Are we training on only nb_train_samples examples?
2. How much data does one iteration train on?
I know these might sound like newbie questions, but I am really interested in this. I am an undergrad and trying my best to learn.
Looking forward to your reply!