nengo / keras-lmu

Keras implementation of Legendre Memory Units
https://www.nengo.ai/keras-lmu/

Mackey-Glass problem with LMU gives an error with the recent keras-lmu 0.2.0 and 0.3.0 updates #25

Open samarsreddy opened 3 years ago

samarsreddy commented 3 years ago
<ipython-input-16-f9340b4f7a97> in <module>()
     50 layers = 4
     51 lstm_model = make_lstm(units=25, layers=layers)
---> 52 lmu_model = make_lmu(units=49, layers=layers)
     53 hybrid_model = make_hybrid(units_lstm=25, units_lmu=40, layers=layers)

1 frames

<ipython-input-16-f9340b4f7a97> in delay_layer(units, **kwargs)
     18                        #memory_d=memory_d,
     19                        #hidden_cell=tf.keras.layers.SimpleRNNCell(212),
---> 20                        theta=4,
     21                       ),
     22                return_sequences=True,

TypeError: __init__() missing 2 required positional arguments: 'memory_d' and 'hidden_cell'

Here I passed the missing parameters:

def make_lstm(units, layers):
    model = Sequential()
    model.add(LSTM(units,
                   input_shape=(train_X.shape[1], 1),  # (timesteps, input_dims)
                   return_sequences=True))  # continuously outputs per timestep
    for _ in range(layers-1):
        model.add(LSTM(units, return_sequences=True))
    model.add(Dense(train_X.shape[-1], activation='tanh'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

def delay_layer(units, **kwargs):
    return RNN(LMUCell(units=units,
                       order=4,
                       memory_d=4,
                       hidden_cell=tf.keras.layers.Layer,
                       #memory_d=memory_d,
                       #hidden_cell=tf.keras.layers.SimpleRNNCell(212),
                       theta=4,
                      ),
               return_sequences=True,
               **kwargs)
def make_lmu(units, layers):
    model = Sequential()
    model.add(delay_layer(units,
                          input_shape=(train_X.shape[1], 1)))  # (timesteps, input_dims)
    for _ in range(layers-1):
        model.add(delay_layer(units))
    model.add(Dense(train_X.shape[-1], activation='linear'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()

    return model

def make_hybrid(units_lstm, units_lmu, layers):
    assert layers == 4, "unsupported"
    model = Sequential()
    model.add(delay_layer(units=units_lmu,input_shape=(train_X.shape[1], 1)))
    model.add(LSTM(units=units_lstm, return_sequences=True))
    model.add(delay_layer(units=units_lmu))
    model.add(LSTM(units=units_lstm, return_sequences=True))
    model.add(Dense(train_X.shape[-1], activation='tanh'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

layers = 4
lstm_model = make_lstm(units=25, layers=layers)
lmu_model = make_lmu(units=49, layers=layers) 
hybrid_model = make_hybrid(units_lstm=25, units_lmu=40, layers=layers)

Even after passing the missing parameters, I get another error about units. I installed the latest keras-lmu but have the same problem, I think because version 0.2.0 removed the units and hidden_activation parameters of LMUCell (these are now specified directly in the hidden_cell; see #22). So please also provide updated code for the Mackey-Glass prediction problem. Thanks in advance.

TypeError                                 Traceback (most recent call last)

<ipython-input-17-1c880cdb37fc> in <module>()
     50 layers = 4
     51 lstm_model = make_lstm(units=25, layers=layers)
---> 52 lmu_model = make_lmu(units=49, layers=layers)
     53 hybrid_model = make_hybrid(units_lstm=25, units_lmu=40, layers=layers)

6 frames

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
    776   for kwarg in kwargs:
    777     if kwarg not in allowed_kwargs:
--> 778       raise TypeError(error_message, kwarg)
    779 
    780 

TypeError: ('Keyword argument not understood:', 'units')
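For context, the Mackey-Glass benchmark discussed throughout this thread is a chaotic delay differential equation. A minimal, dependency-free sketch of generating the series is below; the parameter values (beta=0.2, gamma=0.1, n=10, tau=17) are the conventional chaotic setting and are an assumption here, not taken from the paper notebook, which may use different values and a finer integration scheme.

```python
from collections import deque

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Forward-Euler integration of the Mackey-Glass equation:
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
    """
    delay = max(1, int(round(tau / dt)))
    history = deque([x0] * delay, maxlen=delay)  # buffer holding x(t-tau) .. x(t-dt)
    x = x0
    series = []
    for _ in range(n_steps):
        x_tau = history[0]  # oldest sample in the buffer = x(t - tau)
        x = x + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x)
        history.append(x)   # bounded deque drops the oldest sample automatically
        series.append(x)
    return series
```

Windows of such a series would play the role of `train_X` in the code above.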
drasmuss commented 3 years ago

Which Mackey-Glass code are you trying to run? If it's the results from the original LMU paper, that code base is preserved here https://github.com/nengo/keras-lmu/blob/paper/experiments/mackey-glass.ipynb and should continue to run (note that you will need to be on the paper branch of the LMU repo). If it's your own code that you're trying to get running with KerasLMU 0.3.0, I'd recommend coming by the forum (https://forum.nengo.ai/) and we can help you there; it's just a better format for these kinds of questions.

However, from a brief look at your code, it looks like the problem is here

LMUCell(units=units,
        order=4,
        memory_d=4,
        hidden_cell=tf.keras.layers.Layer,
        #memory_d=memory_d,
        #hidden_cell=tf.keras.layers.SimpleRNNCell(212),
        theta=4,
),

As you said

0.2.0 removed the units and hidden_activation parameters of LMUCell (these are now specified directly in the hidden_cell.

So the units should be specified in the hidden cell, like

LMUCell(order=4,
        memory_d=4,
        hidden_cell=tf.keras.layers.SimpleRNNCell(units=212),
        theta=4,
),

You can see the API of LMUCell here (https://www.nengo.ai/keras-lmu/api-reference.html#keras_lmu.LMUCell) and an example of using the API in practice here (https://www.nengo.ai/keras-lmu/examples/psMNIST.html).

samarsreddy commented 3 years ago

Thank you very much for replying, sir. I read the paper and tried to run the code from https://github.com/nengo/keras-lmu/blob/paper/experiments/mackey-glass.ipynb last month on Google Colab, and it ran without error. Recently, because of the updated lmu 0.2.0 and 0.3.0 versions, it gave me the error in Colab that I mentioned, and when I tried to change the code for 0.3.0 I got the errors I mentioned before. After your reply, I ran the code below.

def make_lstm(units, layers):
    model = Sequential()
    model.add(LSTM(units,
                   input_shape=(train_X.shape[1], 1),  # (timesteps, input_dims)
                   return_sequences=True))  # continuously outputs per timestep
    for _ in range(layers-1):
        model.add(LSTM(units, return_sequences=True))
    model.add(Dense(train_X.shape[-1], activation='tanh'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

def delay_layer(units, **kwargs):
    return RNN(LMUCell(#units=units,
                       order=4,
                       memory_d=4,
                       #hidden_cell=tf.keras.layers.Layer,
                       #memory_d=memory_d,
                       hidden_cell=tf.keras.layers.SimpleRNNCell(units),
                       theta=4,
                      ),
               return_sequences=True,
               **kwargs)

def make_lmu(units, layers):
    model = Sequential()
    model.add(delay_layer(units,
                          input_shape=(train_X.shape[1], 1)))  # (timesteps, input_dims)
    for _ in range(layers-1):
        model.add(delay_layer(units))
    model.add(Dense(train_X.shape[-1], activation='linear'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

def make_hybrid(units_lstm, units_lmu, layers):
    assert layers == 4, "unsupported"
    model = Sequential()
    model.add(delay_layer(units=units_lmu, input_shape=(train_X.shape[1], 1)))
    model.add(LSTM(units=units_lstm, return_sequences=True))
    model.add(delay_layer(units=units_lmu))
    model.add(LSTM(units=units_lstm, return_sequences=True))
    model.add(Dense(train_X.shape[-1], activation='tanh'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

layers = 4
lstm_model = make_lstm(units=25, layers=layers)
lmu_model = make_lmu(units=49, layers=layers)
hybrid_model = make_hybrid(units_lstm=25, units_lmu=40, layers=layers)

Still, I am getting this error:

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_4 (LSTM)                (None, 5000, 25)          2700
_________________________________________________________________
lstm_5 (LSTM)                (None, 5000, 25)          5100
_________________________________________________________________
lstm_6 (LSTM)                (None, 5000, 25)          5100
_________________________________________________________________
lstm_7 (LSTM)                (None, 5000, 25)          5100
_________________________________________________________________
dense_1 (Dense)              (None, 5000, 1)           26
=================================================================
Total params: 18,026
Trainable params: 18,026
Non-trainable params: 0
_________________________________________________________________


WARNING:tensorflow:AutoGraph could not transform <bound method LMUCell.call of <keras_lmu.layers.LMUCell object at 0x7fdc7a04f828>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: Unable to locate the source code of <bound method LMUCell.call of <keras_lmu.layers.LMUCell object at 0x7fdc7a04f828>>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code. To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

Model: "sequential_3"


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
rnn (RNN)                    (None, 5000, 49)          3258
_________________________________________________________________
rnn_1 (RNN)                  (None, 5000, 49)          3450
_________________________________________________________________
rnn_2 (RNN)                  (None, 5000, 49)          3450
_________________________________________________________________
rnn_3 (RNN)                  (None, 5000, 49)          3450
_________________________________________________________________
dense_2 (Dense)              (None, 5000, 1)           50
=================================================================
Total params: 13,658
Trainable params: 13,578
Non-trainable params: 80
_________________________________________________________________


Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
rnn_4 (RNN)                  (None, 5000, 40)          2304
_________________________________________________________________
lstm_8 (LSTM)                (None, 5000, 25)          6600
_________________________________________________________________
rnn_5 (RNN)                  (None, 5000, 40)          2400
_________________________________________________________________
lstm_9 (LSTM)                (None, 5000, 25)          6600
_________________________________________________________________
dense_3 (Dense)              (None, 5000, 1)           26
=================================================================
Total params: 17,930
Trainable params: 17,890
Non-trainable params: 40
_________________________________________________________________

So please send me the correct code to run on Colab without errors, sir, i.e. the code of https://github.com/nengo/keras-lmu/blob/paper/experiments/mackey-glass.ipynb updated as per the new versions.

Thanks.


drasmuss commented 3 years ago

So please send me the correct code to run it on colab without errors

It shouldn't make any difference whether you're running on Colab or not (it's just a Python environment in any case).

If you just want to recreate the results from the paper (from https://github.com/nengo/keras-lmu/blob/paper/experiments/mackey-glass.ipynb), then as mentioned you should install the paper branch of the LMU repo. You can do this in Colab by adding `!pip install git+https://github.com/nengo/keras-lmu@paper` at the top of your code. Or, if you wanted to install the 0.1.0 version (which I'm guessing is what you were using before), you can install that by putting `!pip install lmu==0.1.0` at the top of your code. In either case, you should then be able to recreate the results from the paper.
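For concreteness, the two install options described above as shell commands (in a Colab cell, prefix each with `!`; run only one of them):

```shell
# Option 1: install the paper branch, matching the notebook exactly
pip install git+https://github.com/nengo/keras-lmu@paper

# Option 2: install the 0.1.0 release from PyPI
pip install lmu==0.1.0
```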

If you're wanting to update the code from the paper to work with recent KerasLMU versions, I'd recommend coming by the forum and we can help you there (that's a better format for those kinds of research support questions). I don't see any errors in the output you posted; it just looks like you're getting a few warnings (which should be safe to ignore).

samarsreddy commented 3 years ago

Sure sir, thank you very much for the help. As you said, I tried on Colab (adding `!pip install git+https://github.com/nengo/keras-lmu@paper` at the top of my code, and also tried installing the 0.1.0 version with `!pip install lmu==0.1.0`), but it returned the same error: TypeError: __init__() missing 2 required positional arguments: 'memory_d' and 'hidden_cell'. After that I changed it to RNN(LMUCell(order=4, memory_d=4, hidden_cell=tf.keras.layers.SimpleRNNCell(units), theta=4)). It ran with a few warnings but could not produce the results reported in the paper.

Epoch 1/500
WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function..train_function at 0x7f4c9313e268> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: Unable to locate the source code of <function Model.make_train_function..train_function at 0x7f4c9313e268>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code. To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
1/1 [==============================] - ETA: 0s - loss: 0.1381
WARNING:tensorflow:AutoGraph could not transform <function Model.make_test_function..test_function at 0x7f4c8f5ac6a8> and will run it as-is. Please report this to the TensorFlow team. Cause: Unable to locate the source code of <function Model.make_test_function..test_function at 0x7f4c8f5ac6a8>. Original error: could not get source code. To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

Here the LSTM performs better than the LMU and hybrid models. I think this is because of version differences or mistakes in porting the code to the new versions. I want to run this Mackey-Glass time-series prediction to see that the LMU beats the LSTM, and then apply it to my problem, but because of the version changes I am struggling a lot to implement it and proceed further. That's why I asked you to send me the updated Mackey-Glass code, where the part below is changed as per 0.3.0:

def make_lstm(units, layers):
    model = Sequential()
    model.add(LSTM(units,
                   input_shape=(train_X.shape[1], 1),  # (timesteps, input_dims)
                   return_sequences=True))  # continuously outputs per timestep
    for _ in range(layers-1):
        model.add(LSTM(units, return_sequences=True))
    model.add(Dense(train_X.shape[-1], activation='tanh'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

def delay_layer(units, **kwargs):
    return RNN(LMUCell(units=units,
                       order=4,
                       theta=4,
                      ),
               return_sequences=True,
               **kwargs)

def make_lmu(units, layers):
    model = Sequential()
    model.add(delay_layer(units,
                          input_shape=(train_X.shape[1], 1)))  # (timesteps, input_dims)
    for _ in range(layers-1):
        model.add(delay_layer(units))
    model.add(Dense(train_X.shape[-1], activation='linear'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

def make_hybrid(units_lstm, units_lmu, layers):
    assert layers == 4, "unsupported"
    model = Sequential()
    model.add(delay_layer(units=units_lmu, input_shape=(train_X.shape[1], 1)))
    model.add(LSTM(units=units_lstm, return_sequences=True))
    model.add(delay_layer(units=units_lmu))
    model.add(LSTM(units=units_lstm, return_sequences=True))
    model.add(Dense(train_X.shape[-1], activation='tanh'))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    return model

layers = 4
lstm_model = make_lstm(units=25, layers=layers)
lmu_model = make_lmu(units=49, layers=layers)
hybrid_model = make_hybrid(units_lstm=25, units_lmu=40, layers=layers)

Thanks so much for the help so far. If possible, please make the changes as per the updated versions and send them to me, so that I can run the code on Colab and produce the same results as in the paper. Yes, I will post my further questions on the forum. Thanks a lot.


drasmuss commented 3 years ago

Sure sir, thank you very much for the help as you said I tried on colab(!pip install git+https://github.com/nengo/keras-lmu@paper at the top of your code. Or, if you wanted to install the 0.1.0 version (which I'm guessing is what you were using before), you can install that by putting !pip install lmu==0.1.0 at the top of your code) but it returned same only.

That means that it did not install correctly. To be clear, you need to have a line at the top of your notebook that looks like this:

[screenshot of the install cell: !pip install lmu==0.1.0]

That will install the 0.1.0 version, and you will not get that error.

samarsreddy commented 3 years ago

Thank you very much. I did it the same way, and now I didn't get any warnings or errors. I got results (screenshot attached) which are somewhat different from those in https://github.com/nengo/keras-lmu/blob/paper/experiments/mackey-glass.ipynb. I am mainly looking forward to your guidance when I apply the LMU (especially tuning its parameters) to my problem, to get better accuracy (or lower RMSE) than the LSTM; I will post that on the forum. Thanks again.
