lpq29743 / IAN

A TensorFlow implementation for "Interactive Attention Networks for Aspect-Level Sentiment Classification"
MIT License

Hello, I have a question #2

Closed PrivateThink closed 6 years ago

PrivateThink commented 6 years ago

How should the code be modified to replace the LSTM in the source code with a bidirectional LSTM? I made the change myself, but ran into a problem. Could you please take a look and help me figure out how to fix it?

lpq29743 commented 6 years ago

Please post your modified code.

PrivateThink commented 6 years ago

The commented-out lines are your original code; the rest is my modification.

I only changed two places:

        # aspect_outputs, aspect_state = tf.nn.dynamic_rnn(
        #     tf.contrib.rnn.LSTMCell(self.n_hidden),
        #     inputs=aspect_inputs,
        #     sequence_length=self.aspect_lens,
        #     dtype=tf.float32,
        #     scope='aspect_lstm'
        # )
        # Forward direction cell
        aspect_lstm_fw_cell = tf.contrib.rnn.LSTMCell(self.n_hidden)
        # Backward direction cell
        aspect_lstm_bw_cell = tf.contrib.rnn.LSTMCell(self.n_hidden)
        # Bidirectional LSTM
        print("aspect_inputs", aspect_inputs)
        aspect_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_fw=aspect_lstm_fw_cell,
                                                            cell_bw=aspect_lstm_bw_cell,
                                                            inputs=aspect_inputs,
                                                            sequence_length=self.aspect_lens,
                                                            dtype=tf.float32,
                                                            scope='aspect_bilstm')
        # Concat the forward and backward outputs to create the new representation.
        aspect_outputs = tf.concat(aspect_outputs, 2)
        batch_size = tf.shape(aspect_outputs)[0]
        aspect_avg = tf.reduce_mean(aspect_outputs, 1)

        # context_outputs, context_state = tf.nn.dynamic_rnn(
        #     tf.contrib.rnn.LSTMCell(self.n_hidden),
        #     inputs=context_inputs,
        #     sequence_length=self.context_lens,
        #     dtype=tf.float32,
        #     scope='context_lstm'
        # )
        # Forward direction cell
        context_lstm_fw_cell = tf.contrib.rnn.LSTMCell(self.n_hidden)
        # Backward direction cell
        context_lstm_bw_cell = tf.contrib.rnn.LSTMCell(self.n_hidden)
        # Bidirectional LSTM
        context_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_fw=context_lstm_fw_cell,
                                                              cell_bw=context_lstm_bw_cell,
                                                              inputs=context_inputs,
                                                              sequence_length=self.context_lens,
                                                              dtype=tf.float32,
                                                              scope='context_bilstm')
        # Concat the forward and backward outputs to create the new representation.
        context_outputs = tf.concat(context_outputs, 2)
        context_avg = tf.reduce_mean(context_outputs, 1)

lpq29743 commented 6 years ago

At a quick glance it looks fine. What exactly is the problem?

PrivateThink commented 6 years ago

This is the error; could you please take a look?

Training ...
Traceback (most recent call last):
  File "C:\software\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1361, in _do_call
    return fn(*args)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _run_fn
    target_list, status, run_metadata)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [23,600], In[1]: [300,300]
  [[Node: dynamic_rnn/while/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](dynamic_rnn/while/TensorArrayReadV3, dynamic_rnn/while/MatMul/Enter)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "G:/paperCode/IAN-master/main.py", line 44, in <module>
    tf.app.run()
  File "C:\software\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "G:/paperCode/IAN-master/main.py", line 40, in main
    model.run(train_data, test_data)
  File "G:\paperCode\IAN-master\BIANmodel.py", line 266, in run
    train_loss, train_acc = self.train(train_data)
  File "G:\paperCode\IAN-master\BIANmodel.py", line 214, in train
    _, loss, step, summary = self.sess.run([self.optimizer, self.cost, self.global_step, self.train_summary_op], feed_dict=sample)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1137, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1355, in _do_run
    options, run_metadata)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1374, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [23,600], In[1]: [300,300]
  [[Node: dynamic_rnn/while/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](dynamic_rnn/while/TensorArrayReadV3, dynamic_rnn/while/MatMul/Enter)]]

Caused by op 'dynamic_rnn/while/MatMul', defined at:
  File "G:/paperCode/IAN-master/main.py", line 44, in <module>
    tf.app.run()
  File "C:\software\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "G:/paperCode/IAN-master/main.py", line 39, in main
    model.build_model()
  File "G:\paperCode\IAN-master\BIANmodel.py", line 161, in build_model
    _, aspect_rep_final, aspect_att_final = tf.while_loop(cond=condition, body=body, loop_vars=(0, aspect_rep, aspect_att))
  File "C:\software\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3096, in while_loop
    result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2874, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2814, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "G:\paperCode\IAN-master\BIANmodel.py", line 153, in body
    aspect_score = tf.reshape(tf.nn.tanh(tf.matmul(tf.matmul(a, weights['aspect_score']), tf.reshape(b, [-1, 1])) + biases['aspect_score']), [1, -1])
  File "C:\software\Python35\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2064, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 2790, in _mat_mul
    name=name)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3271, in create_op
    op_def=op_def)
  File "C:\software\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1650, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Matrix size-incompatible: In[0]: [23,600], In[1]: [300,300] [[Node: dynamic_rnn/while/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](dynamic_rnn/while/TensorArrayReadV3, dynamic_rnn/while/MatMul/Enter)]]

lpq29743 commented 6 years ago

The hidden vectors output by a bidirectional LSTM are twice the size of those output by a unidirectional LSTM, so you need to change the sizes of the weight matrices accordingly.
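
For example, the trace above fails at tf.matmul(a, weights['aspect_score']) with In[0]: [23,600] and In[1]: [300,300]: after the change, each time step of the BiLSTM output is 600-dimensional (2 * n_hidden), while the attention weight is still [300, 300]. Below is a minimal sketch of the kind of resizing this implies, not the exact code from this repo; the dictionary keys, variable names, and initializer are assumptions based on the trace and the original IAN model:

    import tensorflow as tf

    n_hidden = 300  # per-direction hidden size; each BiLSTM time step is 2 * n_hidden = 600 dimensions

    # Attention weights sized for the concatenated forward + backward outputs.
    # With the old shape [n_hidden, n_hidden] ([300, 300]) they no longer match
    # the 600-dimensional hidden states, which is exactly the
    # "Matrix size-incompatible: In[0]: [23,600], In[1]: [300,300]" error above.
    weights = {
        'aspect_score': tf.get_variable(
            name='W_a',
            shape=[2 * n_hidden, 2 * n_hidden],
            initializer=tf.random_uniform_initializer(-0.01, 0.01),
        ),
        'context_score': tf.get_variable(
            name='W_c',
            shape=[2 * n_hidden, 2 * n_hidden],
            initializer=tf.random_uniform_initializer(-0.01, 0.01),
        ),
    }

Any other matrix that multiplies these hidden states has to be doubled in the same way; for instance, if the final softmax layer takes the concatenation of the aspect and context representations, its weight would grow from [2 * n_hidden, n_class] to [4 * n_hidden, n_class].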