nengo / keras-lmu

Keras implementation of Legendre Memory Units
https://www.nengo.ai/keras-lmu/

AttributeError: 'LMUFFT' object has no attribute 'kernel' #27

Closed NITROGENousFish closed 3 years ago

NITROGENousFish commented 3 years ago

Hello, I'm a beginner in TensorFlow 2, and recently I've been working on time series forecasting. I've read your paper, and I simply want to use your implementation in place of tf.keras.layers.LSTM.

But when I test keras_lmu.LMU, I get the following error:

AttributeError: in user code:

    C:\ProgramData\Miniconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:806 train_function  *
        return step_function(self, iterator)
    C:\ProgramData\Miniconda3\lib\site-packages\keras_lmu\layers.py:439 call  *
        return self.fft_layer.call(inputs, training=training)
    C:\ProgramData\Miniconda3\lib\site-packages\keras_lmu\layers.py:619 call  *
        u = tf.matmul(inputs, self.kernel, name="input_encoder_mult")

    AttributeError: 'LMUFFT' object has no attribute 'kernel'

model.summary() still runs fine, so I think the model builds successfully but fails to initialize its weights? Maybe this line never ran? https://github.com/nengo/keras-lmu/blob/master/keras_lmu/layers.py#L153
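For context on the diagnosis above: in Keras, a layer's weights are created in build(), which runs on first use rather than when the model graph is assembled, so if a weight-creation line is skipped, any later reference to that weight raises exactly this kind of AttributeError. A minimal pure-Python sketch of that lifecycle (FakeLayer is hypothetical, not part of keras-lmu or TensorFlow):

```python
class FakeLayer:
    """Hypothetical stand-in for a Keras layer, mimicking the
    build()/call() lifecycle: weights exist only after build() runs."""

    def __init__(self):
        self.built = False

    def build(self, input_shape):
        # In a real Keras layer this would be self.add_weight(...).
        # If this line were skipped (e.g. a branch not taken), call()
        # would fail just like the traceback above.
        self.kernel = [[0.0] * input_shape[-1]]
        self.built = True

    def call(self, inputs):
        # Accessing self.kernel before build() raises AttributeError.
        return [sum(w * x for w, x in zip(row, inputs)) for row in self.kernel]


layer = FakeLayer()
try:
    layer.call([1.0, 2.0])  # build() never ran, so no kernel yet
except AttributeError as err:
    print("AttributeError:", err)

layer.build(input_shape=(2,))
print(layer.call([1.0, 2.0]))  # works once the weights exist
```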

My testing code:

import tensorflow as tf
import tensorflow.keras as keras
############################################## from your doc
import keras_lmu
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense
lmu_layer = keras_lmu.LMU(
    memory_d=1,
    order=256,
    theta=784,
    hidden_cell=tf.keras.layers.SimpleRNNCell(units=10),
)

inputs = Input((None, 10))
lmus = lmu_layer(inputs)
outputs = Dense(1)(lmus)

model = Model(inputs=inputs, outputs=outputs)
#################################################### from your doc

model.summary()

x_train = tf.ones((5,5,10))
y_train = tf.ones((5,5,10))
x_test = tf.ones((1,))
y_test = tf.ones((1,))
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.RMSprop(),
    metrics=["accuracy"],
)

history = model.fit(x_train, y_train, epochs=2, validation_split=0.2)

test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])

My Python version is 3.8.3, my TensorFlow version is 2.3.1, and my keras-lmu version is 0.3.0.

Am I writing this code correctly, or is there a bug in keras_lmu/layers.py?

THANKS!!!

drasmuss commented 3 years ago

That is indeed a bug, thanks for finding it for us! There's a fix up now in #28, and we'll do a quick patch release as well once that is merged in.

Unrelated to that bug, there were a few errors in your test code (related to the shapes of inputs and outputs). Here is a version that should work (once the fix from #28 is applied).

import tensorflow as tf
import tensorflow.keras as keras

############################################## from your doc
import keras_lmu
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

lmu_layer = keras_lmu.LMU(
    memory_d=1,
    order=256,
    theta=784,
    hidden_cell=tf.keras.layers.SimpleRNNCell(units=10),
)

inputs = Input((None, 10))
lmus = lmu_layer(inputs)
outputs = Dense(2)(lmus)

model = Model(inputs=inputs, outputs=outputs)
#################################################### from your doc

model.summary()

x_train = tf.ones((5, 5, 10))
x_test = tf.ones((5, 5, 10))
y_train = tf.ones((5, 1))
y_test = tf.ones((5, 1))
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.RMSprop(),
    metrics=["accuracy"],
)

history = model.fit(x_train, y_train, epochs=2, validation_split=0.2)

test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
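One note on why the label shapes changed in the corrected code: SparseCategoricalCrossentropy(from_logits=True) pairs integer class labels of shape (batch,) or (batch, 1) with logits of shape (batch, num_classes), which is why the output layer is Dense(2) and y_train is shaped (5, 1). A minimal pure-Python sketch of the per-sample loss (an illustration of the math, not the TensorFlow implementation):

```python
import math

def sparse_categorical_crossentropy(logits, label):
    """Per-sample loss: -log(softmax(logits)[label])."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[label])

# Dense(2) produces two logits per sample; the label is an integer class id.
logits = [0.5, 1.5]
label = 1
print(round(sparse_categorical_crossentropy(logits, label), 4))  # -> 0.3133
```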

drasmuss commented 3 years ago

We've released KerasLMU 0.3.1 now, which contains this fix. Thanks again for the bug report.