kirarenctaon / timenet


code and paper #1

Open AldrichLeo opened 5 years ago

AldrichLeo commented 5 years ago

Sir, I have read your paper "TimeNet: Pre-trained deep recurrent neural network for time series classification", which this code accompanies. In the paper you state: "Unlike the conventional approach of feeding an input to the decoder at each time step during training and inference [5], the only inputs the decoder gets are the embedding for the time series (final hidden state of encoder), and the steps T for which the decoder has to be iterated in order to reconstruct the input." But in the code the decoder does receive another input:

outputs, states = tf.contrib.legacy_seq2seq.basic_rnn_seq2seq([encoder_inputs], [decoder_inputs], cell)

Here decoder_inputs is fed to the decoder at each step, so it gets more than just the embedding. Doesn't this contradict the paper? Moreover, the paper says: "The encoder-decoder pair is trained in an unsupervised manner as a sequence auto-encoder (SAE) to reconstruct the input time series so as to minimize the objective $E = \sum_{i=1}^{N}\sum_{t=1}^{T}\big(x_t^{(i)} - \hat{x}_t^{(i)}\big)^2$." But the code uses a different cost function:

targets = tf.placeholder(tf.int64, [None], name="targets")
......
cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets))
.......
_, tr_loss = sess.run([optimizer, cost],
                      feed_dict={encoder_inputs: X_batch, decoder_inputs: X_batch, targets: y_batch})

I think this is not an unsupervised manner: the cost compares logits against class labels (y_batch) with a sparse softmax cross-entropy, which is supervised classification rather than reconstruction. Maybe I misunderstand what you mean. Could you explain these questions for me? I would appreciate any help.
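For reference, here is a minimal sketch of what the paper seems to describe, so the difference is concrete. This is my own hypothetical TF1-style code, not taken from this repository; the sizes T and hidden and the layer name "proj" are assumptions. The decoder's only inputs are the embedding (used as its initial state) and the step count T, and the cost is the squared reconstruction error with no labels:

import tensorflow as tf  # assumes TensorFlow 1.x, like the repo's code

T, hidden = 50, 60  # assumed sequence length and hidden size
x = tf.placeholder(tf.float32, [None, T, 1], name="x")

# Encoder: its final hidden state is the embedding of the time series.
with tf.variable_scope("encoder"):
    enc_cell = tf.nn.rnn_cell.GRUCell(hidden)
    _, embedding = tf.nn.dynamic_rnn(enc_cell, x, dtype=tf.float32)

# Decoder: iterated T steps, fed zeros at every step, so the only
# information it receives is the embedding (its initial state) and T.
with tf.variable_scope("decoder"):
    dec_cell = tf.nn.rnn_cell.GRUCell(hidden)
    state = embedding
    zero_input = tf.zeros_like(x[:, 0, :])
    outputs = []
    for t in range(T):
        out, state = dec_cell(zero_input, state)
        outputs.append(tf.layers.dense(out, 1, name="proj", reuse=(t > 0)))
x_hat = tf.stack(outputs, axis=1)

# Unsupervised SAE objective: squared reconstruction error, no y_batch.
cost = tf.reduce_mean(tf.square(x_hat - x))
optimizer = tf.train.AdamOptimizer().minimize(cost)

Under this objective there is no targets placeholder and no class labels at all, which is why the sparse_softmax_cross_entropy_with_logits on y_batch in the repo looks like supervised classifier training instead.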

afrazsalim commented 3 years ago

Indeed, I have gone through the code, and it does not do what they describe in the paper.