polarwatch / internship24

PolarWatch internship

Milestone 4a: successfully set up tensorflow ML platform and modify the model #12

Closed: shospital closed this issue 1 month ago

shospital commented 1 month ago

Attention.__call__() got multiple values for argument 'training'

Arguments received by Functional.call():
  • inputs=tf.Tensor(shape=(None, 1, 11), dtype=float32)
  • training=True
  • mask=None

@tntly It looks like the tensorflow.keras Functional API is raising the error: the training argument passed in by the custom Attention layer conflicts with the one Keras already supplies. I looked at a few Stack Overflow threads, and some suggest uninstalling and reinstalling TensorFlow. I will try uninstalling it and see whether I still get the same issue.
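
For reference, this TypeError usually means a custom layer overrode __call__() (or accepts training positionally) instead of implementing call(); Keras' own Layer.__call__ then forwards the training flag a second time. A minimal sketch of the expected pattern, using a hypothetical MyAttention layer (not our actual custom layer):

import tensorflow as tf

# Hypothetical layer showing the signature Keras expects: implement call(),
# not __call__(); Layer.__call__ forwards the training flag into call() for you.
class MyAttention(tf.keras.layers.Layer):
    def call(self, inputs, training=None):
        weights = tf.nn.softmax(inputs, axis=1)  # toy attention weights over time
        return inputs * weights

inp = tf.keras.Input(shape=(1, 11))
out = MyAttention()(inp)
model = tf.keras.Model(inp, out)
model(tf.zeros((2, 1, 11)), training=True)  # runs without the TypeError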

shospital commented 1 month ago

@tntly RE: the issue you are having: since the custom Attention layer's __call__() receives a conflicting training argument, let's drop that module and use the Attention layer from tensorflow.keras.layers instead.
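
For context, the built-in layer implements dot-product attention over a [query, value] pair and returns a tensor shaped like the query, so passing [x, x] gives self-attention. A quick sketch with illustrative shapes (not the ones in our model):

import tensorflow as tf

# tf.keras.layers.Attention takes [query, value] (and optionally a key)
# and returns a context tensor with the query's time dimension.
query = tf.random.normal((2, 1, 16))  # (batch, query_steps, dim)
value = tf.random.normal((2, 5, 16))  # (batch, value_steps, dim)
context = tf.keras.layers.Attention()([query, value])
print(context.shape)  # (2, 1, 16)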

The original code is:

from tensorflow.keras.layers import Input, Dense, Dropout, LSTM
from tensorflow.keras.models import Model
# Attention here is the custom attention layer (its import is not shown
# in the original snippet); it is the layer that raises the TypeError

model_input = Input(shape=(timestep, features))
x = LSTM(64, return_sequences=True)(model_input)
x = Dropout(0.2)(x)
x = LSTM(32, return_sequences=True)(x)
x = LSTM(16, return_sequences=True)(x)
x = LSTM(16, return_sequences=True)(x)
x = Attention(trainable=True)(x)  # custom attention layer that raised the TypeError
x = Dropout(0.2)(x)
x = Dense(32)(x)
x = Dense(16)(x)
x = Dense(1)(x)
model = Model(model_input, x)
#model.compile(loss='mae', optimizer='adam')
model.summary()

The modified code is:


from tensorflow.keras.layers import Input, Dense, Dropout, LSTM, Attention, Flatten, Concatenate
from tensorflow.keras.models import Model

model_input = Input(shape=(timestep, features))
x = LSTM(64, return_sequences=True)(model_input)
x = Dropout(0.2)(x)
x = LSTM(32, return_sequences=True)(x)
x = LSTM(16, return_sequences=True)(x)
x = LSTM(16, return_sequences=True)(x)

# self-attention: the built-in layer takes [query, value]; here both are x
attention = Attention()([x, x])

# combine the LSTM output x with the attention output along the feature axis
context_vector = Concatenate(axis=-1)([x, attention])
flattened_context = Flatten()(context_vector)
# apply dropout before passing the context to the dense layers
x = Dropout(0.2)(flattened_context)
x = Dense(32)(x)
x = Dense(16)(x)
x = Dense(1)(x)
model = Model(model_input, x)
#model.compile(loss='mae', optimizer='adam')
model.summary()
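
As a quick sanity check, the rebuilt model compiles and trains on dummy data. Here timestep=1 and features=11 are assumptions read off the traceback above, not confirmed dataset dimensions:

import numpy as np

# Assumed dimensions (taken from the traceback; substitute the real ones),
# set before running the model-building code above.
timestep, features = 1, 11

# ...build the model as above, then:
X = np.random.rand(8, timestep, features).astype("float32")
y = np.random.rand(8, 1).astype("float32")
model.compile(loss="mae", optimizer="adam")
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X).shape)  # (8, 1): Flatten + Dense(1) yields one value per sample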
