lipiji / DRGD-LCSTS

code for "Deep Recurrent Generative Decoder for Abstractive Text Summarization"

About the environment #2

Open Yudezhi opened 5 years ago

Yudezhi commented 5 years ago

Could you please tell me a little about the environment used for this project? I configured Theano but get an "out of memory" error, and although I have tried a lot of methods to fix it, none of them work.

I would really appreciate your help. Thanks a lot.

Yudezhi commented 5 years ago

Environment: Ubuntu 16.04 64-bit, Python 2.7, CUDA 8.0, cuDNN 5.5, Theano 0.9.0, pygpu 1.6.9.
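As a quick sanity check on the versions above, Theano's own config can be printed from the same interpreter that runs main_lcsts.py to confirm the GPU backend is actually active. This is a minimal sketch, not code from the repo:

```python
# Minimal sketch: confirm the interpreter sees the reported Theano build
# and that the gpuarray (pygpu) backend is selected before training starts.
from __future__ import print_function

import sys
import theano

print("Python:", sys.version.split()[0])   # expected 2.7.x as reported above
print("Theano:", theano.__version__)       # expected 0.9.0
print("device:", theano.config.device)     # should be cuda/cuda0 for GPU training
print("floatX:", theano.config.floatX)
```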

Traceback (most recent call last):
  File "/home/jason/Documents/DRGD-LCSTS/main_lcsts.py", line 465, in <module>
    run(existing_model_name)
  File "/home/jason/Documents/DRGD-LCSTS/main_lcsts.py", line 421, in run
    cost, a, b, c, d, y_pred = model.train(batch.x, batch.y, batch.x_mask, batch.y_mask, consts["batch_size"], consts["lr"])
  File "/home/jason/anaconda2/envs/DRGD-LCSTS/lib/python2.7/site-packages/theano/compile/function_module.py", line 917, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/home/jason/anaconda2/envs/DRGD-LCSTS/lib/python2.7/site-packages/theano/gof/link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/home/jason/anaconda2/envs/DRGD-LCSTS/lib/python2.7/site-packages/theano/compile/function_module.py", line 903, in __call__
    self.fn() if output_subset is None else\
  File "pygpu/gpuarray.pyx", line 700, in pygpu.gpuarray.pygpu_empty
  File "pygpu/gpuarray.pyx", line 301, in pygpu.gpuarray.array_empty
pygpu.gpuarray.GpuArrayException: cuMemAlloc: CUDA_ERROR_OUT_OF_MEMORY: out of memory
Apply node that caused the error: GpuAlloc(GpuArrayConstant{0.0}, Subtensor{int64}.0, Subtensor{int64}.0, Elemwise{add,no_inplace}.0)
Toposort index: 176
Inputs types: [GpuArrayType(float32, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
Inputs shapes: [(), (), (), ()]
Inputs strides: [(), (), (), ()]
Inputs values: [gpuarray.array(0.0, dtype=float32), array(122), array(300), array(1000)]
Outputs clients: [[GpuIncSubtensor{Set;::, ::, int64:int64:}(GpuAlloc.0, GpuSubtensor{int64::}.0, Constant{0}, ScalarFromTensor.0)]]

Backtrace when the node is created (use Theano flag traceback.limit=N to make it longer):
  File "/home/jason/Documents/DRGD-LCSTS/main_lcsts.py", line 465, in <module>
    run(existing_model_name)
  File "/home/jason/Documents/DRGD-LCSTS/main_lcsts.py", line 380, in run
    model = RNN(modules, consts, options)
  File "/home/jason/Documents/DRGD-LCSTS/rnn.py", line 51, in __init__
    self.define_layers(modules, consts, options)
  File "/home/jason/Documents/DRGD-LCSTS/rnn.py", line 83, in define_layers
    self.word_emb = self.concatenate((word_emb_f, word_emb_b[::-1]), word_emb_f.ndim - 1)
  File "/home/jason/Documents/DRGD-LCSTS/rnn.py", line 230, in concatenate
    out = T.zeros(output_shape)

HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Process finished with exit code 1
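For anyone else who hits the same CUDA_ERROR_OUT_OF_MEMORY here: the failing GpuAlloc is requesting a float32 buffer of shape (122, 300, 1000), so the usual workarounds are to lower consts["batch_size"] in main_lcsts.py (it is passed directly to model.train in the traceback above) or to adjust Theano's GPU flags. Below is a minimal sketch of setting THEANO_FLAGS before Theano is imported; the flag values are assumptions, not a verified fix for this model.

```python
# Minimal sketch: Theano reads THEANO_FLAGS once at import time, so the flags
# must be in the environment before `import theano` (or placed in ~/.theanorc).
# The values below are assumptions, not a tested configuration for this model.
import os

os.environ["THEANO_FLAGS"] = ",".join([
    "device=cuda0",              # gpuarray (pygpu) backend
    "floatX=float32",
    "allow_gc=True",             # free intermediate buffers between apply nodes
    "exception_verbosity=high",  # the flag suggested in the HINT above
])

import theano  # imported only after the flags are set

print(theano.config.device, theano.config.exception_verbosity)
```

If the flags alone do not help, lowering consts["batch_size"] is usually the more reliable way to reduce peak GPU memory.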