sebplorenz opened 1 month ago
```
can only concatenate tuple (not "TrackedList") to tuple

Arguments received by Lambda.call():
  • args=('<KerasTensor shape=(None, 48, 48, 1), dtype=float32, sparse=False, name=input_layer_4>',)
  • kwargs={'mask': 'None'}
```
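For context, a common stumbling block here (an assumption, not a confirmed diagnosis of this exact traceback): in Keras 3, loading a model that contains a `Lambda` layer requires `safe_mode=False`, because deserializing the lambda means executing arbitrary code. A minimal round-trip sketch, assuming the model was saved in the native `.keras` format:

```python
# Minimal sketch: a model with a Lambda layer must be reloaded with
# safe_mode=False in Keras 3; the default safe_mode=True refuses to
# deserialize the serialized lambda.
import os
import tempfile

import numpy as np
import keras
from keras import layers

inp = layers.Input(shape=(4,))
out = layers.Lambda(lambda t: t * 2.0, output_shape=(4,))(inp)
model = keras.Model(inp, out)

path = os.path.join(tempfile.mkdtemp(), "lambda_demo.keras")
model.save(path)

# Opt in to running the serialized lambda only if you trust the file.
reloaded = keras.models.load_model(path, safe_mode=False)
x = np.ones((1, 4), dtype="float32")
y = reloaded.predict(x, verbose=0)
```

This does not address every Lambda serialization problem (closures over Python objects, for instance, may still fail to round-trip), but it is the first thing to try when a saved Lambda model refuses to load.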
I have a model that works on a previous version of Keras, but due to GPU access constraints I need to stick with Keras 3.x and TF 2.16. TBH I'm new to this, so any help resolving the problem is appreciated. This is the model I saved; loading it raises the error above:
```python
def create_model(norm, start_filters=32):
    ir069 = tf.keras.layers.Input(shape=(192, 192, 1))
    ir107 = tf.keras.layers.Input(shape=(192, 192, 1))
    lght = tf.keras.layers.Input(shape=(48, 48, 1))
    inputs = [ir069, ir107, lght]

    # Normalize inputs
    ir069_norm = tf.keras.layers.Lambda(
        lambda x, mu, scale: (x - mu) / scale,
        arguments={'mu': norm['ir069']['shift'], 'scale': norm['ir069']['scale']},
        output_shape=(192, 192, 1))(ir069)
    ir107_norm = tf.keras.layers.Lambda(
        lambda x, mu, scale: (x - mu) / scale,
        arguments={'mu': norm['ir107']['shift'], 'scale': norm['ir107']['scale']},
        output_shape=(192, 192, 1))(ir107)
    lght_norm = tf.keras.layers.Lambda(
        lambda x, mu, scale: (x - mu) / scale,
        arguments={'mu': norm['lght']['shift'], 'scale': norm['lght']['scale']},
        output_shape=(48, 48, 1))(lght)

    # Resize lght up to 192x192
    lght_res = tf.keras.layers.Lambda(
        lambda t: tf.image.resize(t, (192, 192)),
        output_shape=(192, 192, 1))(lght_norm)

    # Concatenate all inputs
    x_inp = tf.keras.layers.Concatenate(axis=-1)([ir069_norm, ir107_norm, lght_res])

    encoder0_pool, encoder0 = encoder_block(x_inp, start_filters)
    encoder1_pool, encoder1 = encoder_block(encoder0_pool, start_filters * 2)
    encoder2_pool, encoder2 = encoder_block(encoder1_pool, start_filters * 4)
    encoder3_pool, encoder3 = encoder_block(encoder2_pool, start_filters * 8)
    center = conv_block(encoder3_pool, start_filters * 32)
    decoder3 = decoder_block(center, encoder3, start_filters * 8)
    decoder2 = decoder_block(decoder3, encoder2, start_filters * 6)
    decoder1 = decoder_block(decoder2, encoder1, start_filters * 4)
    decoder0 = decoder_block(decoder1, encoder0, start_filters * 2)
    decoder00 = decoder_block(decoder0, None, start_filters)
    output = tf.keras.layers.Conv2D(1, (1, 1), padding='same', activation='linear',
                                    name='output_layer')(decoder00)
    return inputs, output
```
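One serialization-friendly alternative (a suggestion, not the only possible fix): replace each Lambda-with-`arguments` by the built-in `Rescaling` layer, which computes `x * scale + offset` and round-trips through save/load without any custom code or `safe_mode` flag. Since `(x - mu) / s == x * (1/s) + (-mu/s)`, the mapping is mechanical. The `norm_layer` helper and the shift/scale values below are made up for illustration; in the model above they would come from the `norm` dict.

```python
# Sketch: normalization via the built-in, fully serializable Rescaling layer.
import os
import tempfile

import numpy as np
import keras
from keras import layers

def norm_layer(shift, scale):
    # (x - shift) / scale  ==  x * (1/scale) + (-shift/scale)
    return layers.Rescaling(scale=1.0 / scale, offset=-shift / scale)

inp = layers.Input(shape=(48, 48, 1))
out = norm_layer(shift=5.0, scale=2.0)(inp)  # example statistics
model = keras.Model(inp, out)

path = os.path.join(tempfile.mkdtemp(), "rescale_demo.keras")
model.save(path)
reloaded = keras.models.load_model(path)  # no safe_mode needed

x = np.full((1, 48, 48, 1), 7.0, dtype="float32")
y = reloaded.predict(x, verbose=0)  # (7 - 5) / 2 == 1.0
```

Similarly, the `tf.image.resize` Lambda can be swapped for the built-in `layers.Resizing(192, 192)`, which is also serializable out of the box.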
Hi, I'm trying to save and load the model from this example: https://keras.io/examples/rl/deep_q_network_breakout/
Saving the model works. When I load the model I'm getting the following error:
I've created a small script to reproduce:
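The reproduction script itself is not shown above. Assuming it follows the linked example, the save/load round trip would look roughly like this (architecture copied from the keras.io Deep Q-Network Breakout example; the file name is arbitrary):

```python
# Hypothetical round-trip sketch of the Q-network from the linked example.
import os
import tempfile

import numpy as np
import keras
from keras import layers

def create_q_model(num_actions=4):
    inputs = layers.Input(shape=(84, 84, 4))
    x = layers.Conv2D(32, 8, strides=4, activation="relu")(inputs)
    x = layers.Conv2D(64, 4, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=1, activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)
    action = layers.Dense(num_actions, activation="linear")(x)
    return keras.Model(inputs=inputs, outputs=action)

model = create_q_model()
path = os.path.join(tempfile.mkdtemp(), "dqn_demo.keras")
model.save(path)
reloaded = keras.models.load_model(path)

x = np.zeros((1, 84, 84, 4), dtype="float32")
y = reloaded.predict(x, verbose=0)  # one Q-value per action
```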