Hello,
When I run crnn_main.py with more than 15000 training examples I get the following error. It also occurs with fewer than 10000 examples if I run the script 3 or more times:
THCudaCheck FAIL file=/py/conda-bld/pytorch_1493676237139/work/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
Traceback (most recent call last):
File "crnn_main.py", line 222, in <module>
cost = trainBatch(crnn, criterion, optimizer)
File "crnn_main.py", line 207, in trainBatch
cost.backward()
File "/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 171, in backward
grad_input = input.new().resize_as_(input)
RuntimeError: cuda runtime error (2) : out of memory at /py/conda-bld/pytorch_1493676237139/work/torch/lib/THC/generic/THCStorage.cu:66
How can I circumvent this?
Thank you.
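One workaround I am considering is lowering the batch size so that each training step (and its backward pass) fits in GPU memory. A minimal, framework-free sketch of the batching logic (the helper name and sizes here are hypothetical, not taken from crnn_main.py):

```python
def make_batches(samples, batch_size):
    """Split a list of samples into consecutive batches of at most batch_size items.

    Peak activation memory during backward() scales with the batch size,
    so halving batch_size roughly halves per-step GPU memory use.
    """
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

# e.g. 15000 samples with a smaller batch size of 16 instead of a larger one:
batches = make_batches(list(range(15000)), 16)
```

Another thing I plan to check is whether any loss values are kept across iterations as graph-holding Variables instead of plain Python numbers, since that can also make memory grow across runs.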