zpschang / seqGAN

dialogue generation with seqGAN

Code error (TypeError: __new__() missing 1 required positional argument: 'attention_state') #2

Open fastcode3d opened 5 years ago

fastcode3d commented 5 years ago

I generated a dataset of my own and ran analyzer.py to produce the file_name_word that the reader in reader.py needs during training. At runtime the following error occurred:

File "model.py", line 86, in build_attention_state
    self.attention, self.time, self.alignments, tuple([]))
TypeError: __new__() missing 1 required positional argument: 'attention_state'

The corresponding code is:

return tf.contrib.seq2seq.AttentionWrapperState(cell_state, self.attention, self.time, self.alignments, tuple([]))

I checked the TensorFlow documentation; the constructor is declared as:

__new__(_cls, cell_state, attention, time, alignments, alignment_history, attention_state)

I'm not clear on the alignment_history and attention_state parameters. Which of the two does tuple([]) correspond to, and what should the other one be set to? I tried adding a second tuple([]), but it still fails, now with:

More specifically: Substructure "type=tuple str=()" is a sequence, while substructure "type=Tensor str=Tensor("g_model/decoder/decoder/while/BasicDecoderStep/decoder/attention_wrapper/Softmax_2:0", shape=(?, 40), dtype=float32)" is not
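Positionally, tuple([]) lines up with alignment_history, so it is the trailing attention_state that is missing, and the second error shows TF expects a Tensor there rather than a sequence. For the stock Bahdanau/Luong mechanisms the attention state is just the alignments tensor, so a plausible fix is the sketch below (untested; assumes a single attention mechanism with alignment_history disabled):

```python
# Sketch only: attention_state must be a Tensor matching the attention
# mechanism's state structure; for Bahdanau/Luong attention that is the
# alignments. tuple([]) stays as the (disabled) alignment_history.
return tf.contrib.seq2seq.AttentionWrapperState(
    cell_state=cell_state,
    attention=self.attention,
    time=self.time,
    alignments=self.alignments,
    alignment_history=tuple([]),
    attention_state=self.alignments)
```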

fastcode3d commented 5 years ago

At line 122 of model.py, I commented out the original partial_decoder_state and passed decoder_init_state instead, after which it runs normally. I also couldn't find this part anywhere in the Neural Machine Translation (seq2seq) Tutorial, so I'd like to ask: why is partial_decoder_state used here?

Original (fails):

decoder_partial = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper_partial, partial_decoder_state, output_layer=projection_layer)

Replacement (runs):

decoder_partial = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper_partial, decoder_init_state, output_layer=projection_layer)
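For context, the version-robust way to build the decoder's initial state in tf.contrib.seq2seq (and, as far as I can tell, the pattern the NMT tutorial itself uses) is to let the AttentionWrapper construct the full state, including the newer attention_state field, and only override cell_state, instead of filling in AttentionWrapperState by hand. A minimal sketch, where batch_size and encoder_final_state are placeholder names rather than identifiers from this repo:

```python
# Sketch: zero_state() returns a complete AttentionWrapperState for
# whatever TF version is running; clone() swaps in the encoder state.
decoder_init_state = decoder_cell.zero_state(
    batch_size, dtype=tf.float32).clone(cell_state=encoder_final_state)

decoder_partial = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper_partial, decoder_init_state,
    output_layer=projection_layer)
```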

SkyeWwq commented 5 years ago

> I generated a dataset of my own and ran analyzer.py ... TypeError: __new__() missing 1 required positional argument: 'attention_state' ... (quoting fastcode3d's original report above)

hello, did you manage to solve this problem?

fastcode3d commented 5 years ago

No, I never really solved it. As far as I remember I just commented out that line and then it ran. The decoder seems to have several modes.
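If I understand the library right, the "mode" comes from which Helper you hand to BasicDecoder. A rough sketch (decoder_inputs, sequence_lengths, embedding, start_tokens and end_token are placeholder names, not identifiers from this repo):

```python
# Teacher forcing (training): feed the ground-truth previous token.
train_helper = tf.contrib.seq2seq.TrainingHelper(
    decoder_inputs, sequence_lengths)

# Greedy decoding (inference): feed back the argmax of the last step.
infer_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding, start_tokens, end_token)

decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, train_helper, decoder_init_state,
    output_layer=projection_layer)
```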

SkyeWwq commented 5 years ago

Thanks for the reply. It turned out to be a version issue; it runs fine under tensorflow==1.5.
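That matches the error: the attention_state field was added to AttentionWrapperState in a later 1.x release, so a hand-built state using the old five-field signature only works on older versions. If anyone wants to fail fast instead of hitting the TypeError mid-build, a small guard might look like this (a sketch, not part of this repo):

```python
# Sketch: abort early on TF versions newer than the one reported
# to work with this repo's hand-built AttentionWrapperState.
import tensorflow as tf
from distutils.version import LooseVersion

if LooseVersion(tf.__version__) > LooseVersion('1.5.0'):
    raise RuntimeError('seqGAN was reported working on tensorflow==1.5; '
                       'found %s' % tf.__version__)
```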