damientseng / Seq2Seq-Chatbot

A Theano implementation of the neural conversational model
MIT License

Couldn't run on GPU #2

Closed iamsile closed 8 years ago

iamsile commented 8 years ago

Hi @saltypaul, after the update I ran it on a Tesla K20c, but at epoch 300/20000 something went wrong in the program.

```
Traceback (most recent call last):
  File "build_model.py", line 76, in <module>
    model = build_model(retrain=True)
  File "build_model.py", line 55, in build_model
    loss, costs = model.train(enIpt, enMsk, deIpt, deMsk, deTgt)
  File "/export/taowei/work/Seq2Seq/src/seq2seq.py", line 175, in train
    return self._train(encoderInputs, encoderMask, decoderInputs, decoderMask, decoderTarget)
  File "/usr/lib/python2.7/site-packages/theano/compile/function_module.py", line 875, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/usr/lib/python2.7/site-packages/theano/gof/link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/usr/lib/python2.7/site-packages/theano/compile/function_module.py", line 862, in __call__
    self.fn() if output_subset is None else\
IndexError: One of the index value is out of bound. Error code: 65535.
Apply node that caused the error: GpuAdvancedSubtensor1(Encoder LookUpTable, Elemwise{Cast{int64}}.0)
Toposort index: 98
Inputs types: [CudaNdarrayType(float32, matrix), TensorType(int64, vector)]
Inputs shapes: [(29331, 512), (500,)]
Inputs strides: [(512, 1), (8,)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuReshape{3}(GpuAdvancedSubtensor1.0, MakeVector{dtype='int64'}.0)]]
```

Do you know how to fix it? Thank you very much!
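
The relevant detail in the trace is `Inputs shapes: [(29331, 512), (500,)]`: the embedding table has 29331 rows, and the IndexError means one of the 500 token ids being looked up is >= 29331. A quick way to catch this before training starts (a sketch, assuming numpy arrays of token ids; `check_vocab_size` is a hypothetical helper, not part of this repo):

```python
import numpy as np

def check_vocab_size(token_ids, vocab_size):
    """Fail fast if any encoded token id falls outside the embedding table."""
    max_id = int(np.max(token_ids))
    if max_id >= vocab_size:
        raise ValueError("vocab_size=%d is too small: corpus contains token id %d"
                         % (vocab_size, max_id))

# e.g. check_vocab_size(np.concatenate([enIpt.ravel(), deIpt.ravel()]), 29331)
```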

damientseng commented 8 years ago

What data set are you using? The "Encoder LookUpTable" has shape (vocabulary_size, hidden_size). The build_model method accepts vocabulary_size as its first argument, which should be the size of the token set of your data.
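
The failure mode is easy to reproduce on CPU with plain numpy, since the GpuAdvancedSubtensor1 node in the trace is just integer indexing into the embedding matrix (an illustrative sketch, not code from this repo):

```python
import numpy as np

vocab_size, hidden_size = 29331, 512
lookup_table = np.zeros((vocab_size, hidden_size), dtype=np.float32)

ids_ok = np.array([5, 120, 29330], dtype=np.int64)
print(lookup_table[ids_ok].shape)   # (3, 512): one embedding row per token id

ids_bad = np.array([5, 120, 29331], dtype=np.int64)
lookup_table[ids_bad]               # IndexError: index 29331 is out of bounds
```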

iamsile commented 8 years ago

I used the original data from the project and didn't modify any parameters. I set the shape to (vocabulary_size, hidden_size).

damientseng commented 8 years ago

That's odd... My vocabulary size is 29331 because in make_convs.py I trimmed off tokens that appear fewer than 5 times in total. Obviously your real vocabulary size is much larger.
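
For reference, that kind of frequency trimming typically looks like the following (a minimal sketch; the actual make_convs.py implementation may differ):

```python
from collections import Counter

def build_vocab(tokens, min_count=5):
    """Keep tokens seen at least min_count times; everything else maps to <unk>."""
    counts = Counter(tokens)
    kept = sorted(t for t, c in counts.items() if c >= min_count)
    return {tok: i for i, tok in enumerate(['<unk>'] + kept)}

# vocab = build_vocab(all_corpus_tokens); len(vocab) is the vocabulary_size
```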

iamsile commented 8 years ago

Thank you! Let me try to solve it.

iamsile commented 8 years ago

@saltypaul, you're right! It was my fault: I set a small vocab_size. Thank you very much for your help!