Hi, when I tried to train on my dataset with 02-train.py, I ran into the problem below. The full log is as follows.
Traceback (most recent call last):
File "02-train.py", line 153, in
train()
File "02-train.py", line 143, in train
salgan_batch_iterator(model, train_data, validation_sample.image.data)
File "02-train.py", line 85, in salgan_batch_iterator
G_obj, D_obj, G_cost = model.G_trainFunction(batch_input, batch_output)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 917, in call
storage_map=getattr(self.fn, 'storage_map', None))
File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 325, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 903, in call
self.fn() if output_subset is None else\
RuntimeError: gpudata_alloc: cuMemAlloc: CUDA_ERROR_OUT_OF_MEMORY: out of memory
Apply node that caused the error: GpuDnnReduction{red_op='add', axis=(1,), acc_dtype='float32', dtype='float32', return_indices=False}(GpuContiguous.0)
Toposort index: 1520
Inputs types: [GpuArrayType(float32, 3D)]
Inputs shapes: [(393216, 2, 256)]
Inputs strides: [(2048, 1024, 4)]
Inputs values: ['not shown']
Outputs clients: [[GpuReshape{4}(GpuDnnReduction{red_op='add', axis=(1,), acc_dtype='float32', dtype='float32', return_indices=False}.0, MakeVector{dtype='int64'}.0)]]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1326, in access_grad_cache
term = access_term_cache(node)[idx]
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1021, in access_term_cache
output_grads = [access_grad_cache(var) for var in node.outputs]
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1326, in access_grad_cache
term = access_term_cache(node)[idx]
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1021, in access_term_cache
output_grads = [access_grad_cache(var) for var in node.outputs]
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1326, in access_grad_cache
term = access_term_cache(node)[idx]
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1021, in access_term_cache
output_grads = [access_grad_cache(var) for var in node.outputs]
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1326, in access_grad_cache
term = access_term_cache(node)[idx]
File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 1162, in access_term_cache
new_output_grads)
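I am wondering whether the batch size is simply too large for my GPU, given the huge intermediate shape (393216, 2, 256) in the error. Would lowering the batch size before the Theano training functions run be the right fix? Roughly something like this (just a sketch; BATCH_SIZE and iterate_minibatches are my guesses and may not match what 02-train.py actually uses):

# Sketch only: the names below are my assumptions, not necessarily the repo's.
BATCH_SIZE = 16  # smaller batch so each G/D step allocates less GPU memory

def iterate_minibatches(inputs, targets, batch_size=BATCH_SIZE):
    # yield the training data in smaller chunks
    for start in range(0, len(inputs), batch_size):
        yield inputs[start:start + batch_size], targets[start:start + batch_size]

Or is this more likely a problem with my GPU / Theano setup rather than the batch size?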
Could anybody help me please? Thanks.