Hi
I want to train an RNN language model (RNNLM). My vocabulary size is 48603 (cutoff = 100) or 72294 (cutoff = 50). The data is stored in CTF format, which is suitable for sparse data. Training the RNN with the 48603-word vocabulary goes fine, but with the bigger vocabulary of 72294 words, the training process stops after 135 epochs with a GPU out-of-memory error!
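For context, my data looks roughly like the following CTF fragment, with each word encoded as a sparse one-hot index (the stream names |S0 and |S1 for input/label words are just examples, and the indices are made up):

```
0 |S0 12:1  |S1 340:1
0 |S0 340:1 |S1 7:1
0 |S0 7:1   |S1 48601:1
1 |S0 25:1  |S1 1003:1
1 |S0 1003:1 |S1 9:1
```

Each line carries a sequence id, then one `index:1` entry per stream, so only the non-zero vocabulary index is stored rather than a full dense vector.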
The training code is as follows:
and this is the error:
Best regards