manohar029 / TimeSeries-Seq2Seq-deepLSTMs-Keras

This project aims to give you an introduction to how Seq2Seq-based encoder-decoder neural network architectures can be applied to time series data to make forecasts. The code is implemented in Python with Keras (TensorFlow backend).

Error when I create_model with bidirectional #2

Open palvors opened 4 years ago

palvors commented 4 years ago

Hi, I got an error when I tried to execute create_model([6], bidirectional=True).

"List index out of range"

The problem comes from this line:

temp = concatenate([bi_encoder_states[i],bi_encoder_states[2*n_layers + i]], axis=-1)

bi_encoder_states has 2 entries => indices [0, 1], and n_layers = 1,

so with the for-loop range for i in range(int(len(bi_encoder_states)/2)):

int(len(bi_encoder_states)/2) = 2/2 = 1, so i only takes the value 0,

and when you evaluate the index [2*n_layers + i] with i = 0:

2*n_layers + i = 2*1 + 0 = 2 ---> list index out of range
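
For reference, a minimal sketch of that index arithmetic, using placeholder strings in place of the real state tensors:

    # 2 state entries for a single bidirectional encoder layer (placeholders)
    bi_encoder_states = ["fwd_h_c", "bwd_h_c"]
    n_layers = 1

    for i in range(int(len(bi_encoder_states) / 2)):  # range(1) -> i = 0
        idx = 2 * n_layers + i                        # 2*1 + 0 = 2
        bi_encoder_states[idx]                        # IndexError: list index out of range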

Is this the latest version of the code? I tried to run your notebook, but I got this error.

Thank you

HonzaBejvl commented 4 years ago

Thank you for the awesome post.

I would like to play with your model, but when I try the bidirectional flag I get the exact same problem as palvors described above. @manohar029 Could you please point us in the right direction on how to prepare encoder_states to initialize decoder_lstm?

    encoder_states = []
    for i in range(int(len(bi_encoder_states)/2)):
        temp = concatenate([bi_encoder_states[i],bi_encoder_states[2*n_layers + i]], axis=-1)
        encoder_states.append(temp)
DavidArenburg commented 4 years ago

@palvors You are probably using TensorFlow 2. I had the same issue.

First of all, you need to import the layers in the following manner:

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, LSTMCell, RNN, Bidirectional, concatenate
from tensorflow.keras.optimizers import Adam

Second of all, it seems like the layer structure has changed in TF2. I fiddled with it for quite a while until (I think) I found a fix. Try the following:

def create_model(layers, bidirectional = False):
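    # Note: n_in_features and n_out_features are assumed to be defined
    # globally, as in the original notebook.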

    n_layers = len(layers)

    ## Encoder
    encoder_inputs = Input(shape = (None, n_in_features))
    lstm_cells = [LSTMCell(hidden_dim) for hidden_dim in layers]

    if bidirectional:

        encoder = Bidirectional(RNN(lstm_cells, return_state=True))
        encoder_outputs_and_states = encoder(encoder_inputs)
        bi_encoder_states = encoder_outputs_and_states[1:]
        encoder_states = []

        for i in range(int(len(bi_encoder_states) / 2)):

            temp = []
            for j in range(2):

                temp.append(concatenate([bi_encoder_states[i][j], bi_encoder_states[n_layers + i][j]], axis = -1))

            encoder_states.append(temp)
    else:  

        encoder = RNN(lstm_cells, return_state = True)
        encoder_outputs_and_states = encoder(encoder_inputs)
        encoder_states = encoder_outputs_and_states[1:]

    ## Decoder
    decoder_inputs = Input(shape = (None, n_out_features))

    if bidirectional:

        decoder_cells = [LSTMCell(hidden_dim*2) for hidden_dim in layers]
    else:

        decoder_cells = [LSTMCell(hidden_dim) for hidden_dim in layers]

    decoder_lstm = RNN(decoder_cells, return_sequences = True, return_state=True)
    decoder_outputs_and_states = decoder_lstm(decoder_inputs, initial_state = encoder_states)
    decoder_outputs = decoder_outputs_and_states[0]
    decoder_dense = Dense(n_out_features) 
    decoder_outputs = decoder_dense(decoder_outputs)

    model = Model([encoder_inputs,decoder_inputs], decoder_outputs)
    return model
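
For anyone trying this out, a quick usage sketch (the feature dimensions below are just placeholders; in the notebook n_in_features and n_out_features come from the data):

    # Placeholder feature dimensions -- set these from your data as in the notebook
    n_in_features, n_out_features = 1, 1

    model = create_model([6], bidirectional=True)
    model.compile(optimizer=Adam(), loss='mse')
    model.summary()
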
523a commented 4 years ago

David, thanks for your post. It helped a lot. Everything is working)))

huangboyua commented 9 months ago

thank you david you are my dad