ResourceExhaustedError (see above for traceback): OOM when allocating tensor of shape [] and type float
[[node lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/kernel/Initializer/random_uniform/min (defined at /home/miniconda3/lib/python3.6/site-packages/bilm/training.py:410) = Const[_class=["loc:@lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/kernel/Assign"], dtype=DT_FLOAT, value=Tensor<type: float shape: [] values: -0.0185652673>, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
Decrease the batch size or other hyperparameters (vocabulary size, model size, number of negative samples for the softmax) until it fits onto your GPU.
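One way to apply that advice is to shrink the model's hyperparameters before training. A minimal sketch below, assuming a bilm-tf-style `options` dict (the key names follow bilm-tf's `train_elmo.py`; the specific reduced values are assumptions to tune until training fits on your GPU):

```python
# Hypothetical reduced options for bilm-tf training; the defaults in
# bilm-tf use batch_size=128, lstm dim=4096, projection_dim=512, and
# n_negative_samples_batch=8192, which can exhaust a small GPU.
options = {
    'batch_size': 64,                   # halved batch size (assumption)
    'n_tokens_vocab': 50000,            # smaller vocabulary (assumption)
    'n_negative_samples_batch': 4096,   # fewer softmax negative samples
    'lstm': {
        'dim': 2048,                    # smaller LSTM hidden size
        'projection_dim': 256,          # smaller projection
        'n_layers': 2,
        'use_skip_connections': True,
        'cell_clip': 3,
        'proj_clip': 3,
    },
}

print(options['batch_size'], options['lstm']['dim'])
```

Reduce one setting at a time and retry, so you keep as much capacity as your GPU allows.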
How can I solve this?
My vocabulary file includes the special tokens <S>, </S>, <UNK>, and I am training with n_gpu=1.
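Since bilm-tf requires the vocabulary file to begin with those three special tokens, it is worth verifying that before ruling it out. A small sketch, assuming the vocabulary is one token per line (the helper name is hypothetical):

```python
def has_required_tokens(vocab_lines):
    """Return True if the first three vocabulary entries are the
    special tokens bilm-tf expects, in this exact order."""
    return vocab_lines[:3] == ['<S>', '</S>', '<UNK>']

# Example: a well-formed vocabulary starts with the special tokens.
print(has_required_tokens(['<S>', '</S>', '<UNK>', 'the', 'of']))
```

A malformed vocabulary usually produces a different error than OOM, so if this check passes, the memory-related settings above are the more likely fix.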