PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License

Custom LSTM PyTorch model Full integer quantization problem #649

Closed: karen-ca closed this issue 2 months ago

karen-ca commented 3 months ago

Issue Type

Others

OS

Linux

onnx2tf version number

1.20.0

onnx version number

1.16.1

onnxruntime version number

1.17.1

onnxsim (onnx_simplifier) version number

0.4.33

tensorflow version number

2.13.0

Download URL for ONNX

Parameter Replacement JSON

I don't have a Replacement JSON

Description

  1. We are a company using a Raspberry Pi 5 + Coral TPU to run models for our main product. If we cannot get this full integer quantized model, we cannot map it to the Coral TPU and our project will fail. Could you help us, please? Thank you!

  2. I am trying to quantize my ActivityAI4.onnx model to a full integer quantized model using the following command: onnx2tf -i exports/onnx/ActivityAI4.onnx -oiqt -cind input gen/st.npy [0.40510672] [0.18647602]. My model input size is (36,17,2); st.npy has shape (5,36,17,2) and holds a NumPy array of 5 stacked (36,17,2) arrays. I calculated the mean and std with np.load('gen/st.npy').mean() and np.load('gen/st.npy').std() (a sketch of how this calibration file could be built follows below).
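For reference, a minimal sketch of how such a calibration file and the scalar statistics could be produced (the gen/sample_*.npy file names are hypothetical; only the shapes and the values passed to -cind follow the description above):

import numpy as np

# Stack five calibration samples of shape (36, 17, 2) into one
# (5, 36, 17, 2) array and save it for onnx2tf's -cind option.
samples = [np.load(f"gen/sample_{i}.npy") for i in range(5)]  # each (36, 17, 2)
st = np.stack(samples, axis=0)                                # -> (5, 36, 17, 2)
np.save("gen/st.npy", st)

# Scalar mean/std over the whole calibration set, as passed on the CLI.
print(st.mean())  # e.g. 0.40510672
print(st.std())   # e.g. 0.18647602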

The command above gives me the following error:

onnx2tf -i exports/onnx/ActivityAI4.onnx  -oiqt -cind input gen/st.npy [0.40510672] [0.18647602]

Model optimizing started ============================================================
Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃            ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Add        │ 3              │ 3                │
│ Constant   │ 19             │ 19               │
│ Gemm       │ 2              │ 2                │
│ LSTM       │ 2              │ 2                │
│ LeakyRelu  │ 4              │ 4                │
│ MatMul     │ 3              │ 3                │
│ Reshape    │ 2              │ 2                │
│ Softmax    │ 1              │ 1                │
│ Squeeze    │ 2              │ 2                │
│ Transpose  │ 2              │ 2                │
│ Unsqueeze  │ 1              │ 1                │
│ Model Size │ 3.7MiB         │ 3.7MiB           │
└────────────┴────────────────┴──────────────────┘

Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃            ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Add        │ 3              │ 3                │
│ Constant   │ 19             │ 19               │
│ Gemm       │ 2              │ 2                │
│ LSTM       │ 2              │ 2                │
│ LeakyRelu  │ 4              │ 4                │
│ MatMul     │ 3              │ 3                │
│ Reshape    │ 2              │ 2                │
│ Softmax    │ 1              │ 1                │
│ Squeeze    │ 2              │ 2                │
│ Transpose  │ 2              │ 2                │
│ Unsqueeze  │ 1              │ 1                │
│ Model Size │ 3.7MiB         │ 3.7MiB           │
└────────────┴────────────────┴──────────────────┘

Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃            ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Add        │ 3              │ 3                │
│ Constant   │ 19             │ 19               │
│ Gemm       │ 2              │ 2                │
│ LSTM       │ 2              │ 2                │
│ LeakyRelu  │ 4              │ 4                │
│ MatMul     │ 3              │ 3                │
│ Reshape    │ 2              │ 2                │
│ Softmax    │ 1              │ 1                │
│ Squeeze    │ 2              │ 2                │
│ Transpose  │ 2              │ 2                │
│ Unsqueeze  │ 1              │ 1                │
│ Model Size │ 3.7MiB         │ 3.7MiB           │
└────────────┴────────────────┴──────────────────┘

Model optimizing complete!

Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!

Model loaded ========================================================================

Model conversion started ============================================================
INFO: input_op_name: input shape: [36, 17, 2] dtype: float32
WARNING: The optimization process for shape estimation is skipped because it contains OPs that cannot be inferred by the standard onnxruntime.
WARNING: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model.cc:179 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 10, max supported IR version: 9

INFO: 2 / 23
INFO: onnx_op_type: Unsqueeze onnx_op_name: wa/Unsqueeze
INFO:  input_name.1: input shape: [36, 17, 2] dtype: float32
INFO:  output_name.1: wa/Unsqueeze_output_0 shape: [1, 36, 17, 2] dtype: float32
INFO: tf_op_type: reshape
INFO:  input.1.tensor: name: input shape: (36, 2, 17) dtype: <dtype: 'float32'> 
INFO:  input.2.shape: val: [1, 36, 2, 17] 
INFO:  output.1.output: name: tf.reshape/Reshape:0 shape: (1, 36, 2, 17) dtype: <dtype: 'float32'> 

...

INFO: 23 / 23
INFO: onnx_op_type: Softmax onnx_op_name: wa/softmax/Softmax
INFO:  input_name.1: wa/d5/Gemm_output_0 shape: [1, 4] dtype: float32
INFO:  output_name.1: output shape: [1, 4] dtype: float32
INFO: tf_op_type: softmax_v2
INFO:  input.1.logits: name: tf.__operators__.add_1/AddV2:0 shape: (1, 4) dtype: <dtype: 'float32'> 
INFO:  input.2.axis: val: 1 
INFO:  output.1.output: name: tf.nn.softmax/wa/softmax/Softmax:0 shape: (1, 4) dtype: <dtype: 'float32'> 

saved_model output started ==========================================================
saved_model output complete!
Float32 tflite output complete!
Float16 tflite output complete!
Input signature information for quantization
signature_name: serving_default
input_name.0: input shape: (36, 2, 17) dtype: <dtype: 'float32'>
Dynamic Range Quantization tflite output complete!
fully_quantize: 0, inference_type: 6, input_inference_type: FLOAT32, output_inference_type: FLOAT32
INT8 Quantization tflite output complete!
fully_quantize: 0, inference_type: 6, input_inference_type: INT8, output_inference_type: INT8
Full INT8 Quantization tflite output complete!
Traceback (most recent call last):
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/onnx2tf/onnx2tf.py", line 1520, in convert
    tflite_model = converter.convert()
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1065, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1042, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1390, in convert
    return self._convert_from_saved_model(graph_def)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1257, in _convert_from_saved_model
    return self._optimize_tflite_model(
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 991, in _optimize_tflite_model
    model = self._quantize(
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 729, in _quantize
    return calibrate_quantize.calibrate_and_quantize(
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 194, in calibrate_and_quantize
    return self._calibrator.QuantizeModel(
RuntimeError: Max and min for dynamic tensors should be recorded during calibration: Failed for tensor arg1
Empty min/max for tensor arg1

WARNING: INT8 Quantization with int16 activations tflite output failed.
Traceback (most recent call last):
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/onnx2tf/onnx2tf.py", line 1551, in convert
    tflite_model = converter.convert()
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1065, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1042, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1390, in convert
    return self._convert_from_saved_model(graph_def)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1257, in _convert_from_saved_model
    return self._optimize_tflite_model(
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 991, in _optimize_tflite_model
    model = self._quantize(
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 729, in _quantize
    return calibrate_quantize.calibrate_and_quantize(
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "/home/server3090ti/Projects/sicherov2/.env/lib/python3.9/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 194, in calibrate_and_quantize
    return self._calibrator.QuantizeModel(
RuntimeError: Max and min for dynamic tensors should be recorded during calibration: Failed for tensor arg1
Empty min/max for tensor arg1

WARNING: Full INT8 Quantization with int16 activations tflite output failed.
  3. I tried converting the model to a .pb TensorFlow graph and quantizing it with the script below; I got a full integer quantized model, but it cannot be mapped to the Edge TPU.

Here is the quantization script:

import tensorflow as tf
import numpy as np
import os

def representative_dataset_gen():
    # Define the path to the gen folder
    gen_folder_path = 'gen'

    # Get a list of all .npy files in the gen folder
    file_list = [os.path.join(gen_folder_path, f) for f in os.listdir(gen_folder_path) if f.endswith('.npy')]

    for file_path in file_list:
        # Load the NumPy array from the file
        numpy_array = np.load(file_path)
        numpy_array = np.reshape(numpy_array, (36,2,17))

        # Convert the NumPy array to a TensorFlow tensor
        tensorflow_tensor = tf.convert_to_tensor(numpy_array)

        # Yield the tensor wrapped in a dictionary with 'input' key, suitable for TensorFlow model
        yield [tensorflow_tensor]

converter = tf.lite.TFLiteConverter.from_saved_model('./exports/ActivityAI4_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8,
    tf.lite.OpsSet.TFLITE_BUILTINS,
]
converter.target_spec.supported_types = [tf.int8]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()

with open("ActivityAI4_full_integer_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)

Here is the mapping error:

edgetpu_compiler exports/quantization/ActivityAI4_full_integer_quant.tflite 
Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!
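The compiler message suggests first making sure the model loads and runs in the TFLite interpreter. A minimal sketch of that check (the model path matches the script above; the zero-filled input is just a placeholder):

import numpy as np
import tensorflow as tf

# Load the quantized model; allocate_tensors() is where a graph with
# dynamic-sized tensors typically fails.
interpreter = tf.lite.Interpreter(model_path="ActivityAI4_full_integer_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]))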
  4. Our company needs this model to finish our main product.
  5. This is our custom model, trained on our own data.
  6. Your GitHub work has helped me a lot: I have already converted many models from ONNX to TFLite and even quantized them, and they fit perfectly on the Edge TPU. Thank you for your contribution! Could you help us, please?

[Screenshot: 2024-06-07 at 11:39:45]
PINTO0309 commented 3 months ago

I was traveling far away for work yesterday and could not reply.

  1. I cannot access the Google Drive link.
  2. Tell me what kind of data the (36,17,2) input is. No one but you knows whether NCW, NWC, or some other layout is the correct answer. Is it image, audio, or text? I cannot investigate unless I know what the data means and what the correct dimensional arrangement is.
  3. Which type of model do you actually want to use: the model with INT16 activations or the model with UINT8 activations? Depending on the answer to this question, this may or may not be addressable. The following warning message is due to a poor TensorFlow implementation, so it is not an onnx2tf problem.

    RuntimeError: Max and min for dynamic tensors should be recorded during calibration: Failed for tensor arg1
    Empty min/max for tensor arg1
    
    WARNING: Full INT8 Quantization with int16 activations tflite output failed.

    If it does not have to be INT16 activation, then the logic below makes no sense (a plain INT8 configuration is sketched at the end of this comment).

    converter.target_spec.supported_ops = \
    [
        tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8, # <--- here
        tf.lite.OpsSet.TFLITE_BUILTINS
    ]
  4. I have no idea what the real cause of the error is until I see the ONNX file, but the error that edgetpu-compiler outputs below is most likely an edgetpu-compiler problem. onnx2tf was originally designed to avoid such edgetpu-compiler errors as much as possible, but there is still a possibility that an unknown workaround is needed, and in the worst case a small workaround may not be able to handle it.
    ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
  5. The versions of your associated packages are quite old. Make sure to update your package versions before working on this; the package versions you are using have a fatal bug that corrupts ONNX files. https://github.com/PINTO0309/onnx2tf?tab=readme-ov-file#1-install
    pip install -U simple_onnx_processing_tools \
    && pip install -U "sne4onnx>=1.0.13" \
    && pip install -U "sng4onnx>=1.0.4" \
    && pip install -U onnx2tf \
    && pip install -U tensorflow==2.16.1

    Related error message; the onnxruntime is aborting:

    WARNING: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model.cc:179 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 10, max supported IR version: 9

Outdated package versions induce a number of problems that are different from the essential problem. Therefore, it is recommended to use the latest packages, with their numerous bug fixes, before beginning to investigate and address the problem.

The issue you shared with us still lacks quite a bit of information needed to begin the investigation.
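If UINT8/INT8 activations are enough (point 3 above), a minimal sketch of the standard full-integer configuration looks like this (assuming the representative_dataset_gen from your script above; this is the generic TFLite recipe, not a guaranteed fix for this model):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('./exports/ActivityAI4_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Restrict to INT8 builtins only; no INT16-activation OpsSet.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Integer input/output, as required for Edge TPU deployment.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()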

karen-ca commented 2 months ago

I am sorry for the late answer.

  1. I'm sorry about that; I updated the permissions, so now you can download the ONNX model.
  2. My data is a skeleton cache of 36 continuous frames; each skeleton has 17 keypoints with [x, y] coordinates, so my input shape is (36, 17, 2) (frames, keypoints, [x, y]).
  3. I need a full integer quantization model to map onto the Edge TPU, but I think this quantized model is not correct, because I have mapped other models onto the Edge TPU perfectly using onnx2tf with the -oiqt flag.
  4. After updating all packages, I am now getting a segmentation fault:
    
    onnx2tf -i exports/onnx/ActivityAI4.onnx  -oiqt -cind input gen/st.npy [0.40510672] [0.18647602]

Model optimizing started ============================================================
Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃            ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Add        │ 3              │ 3                │
│ Constant   │ 19             │ 19               │
│ Gemm       │ 2              │ 2                │
│ LSTM       │ 2              │ 2                │
│ LeakyRelu  │ 4              │ 4                │
│ MatMul     │ 3              │ 3                │
│ Reshape    │ 2              │ 2                │
│ Softmax    │ 1              │ 1                │
│ Squeeze    │ 2              │ 2                │
│ Transpose  │ 2              │ 2                │
│ Unsqueeze  │ 1              │ 1                │
│ Model Size │ 3.7MiB         │ 3.7MiB           │
└────────────┴────────────────┴──────────────────┘

Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃            ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Add        │ 3              │ 3                │
│ Constant   │ 19             │ 19               │
│ Gemm       │ 2              │ 2                │
│ LSTM       │ 2              │ 2                │
│ LeakyRelu  │ 4              │ 4                │
│ MatMul     │ 3              │ 3                │
│ Reshape    │ 2              │ 2                │
│ Softmax    │ 1              │ 1                │
│ Squeeze    │ 2              │ 2                │
│ Transpose  │ 2              │ 2                │
│ Unsqueeze  │ 1              │ 1                │
│ Model Size │ 3.7MiB         │ 3.7MiB           │
└────────────┴────────────────┴──────────────────┘

Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃            ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Add        │ 3              │ 3                │
│ Constant   │ 19             │ 19               │
│ Gemm       │ 2              │ 2                │
│ LSTM       │ 2              │ 2                │
│ LeakyRelu  │ 4              │ 4                │
│ MatMul     │ 3              │ 3                │
│ Reshape    │ 2              │ 2                │
│ Softmax    │ 1              │ 1                │
│ Squeeze    │ 2              │ 2                │
│ Transpose  │ 2              │ 2                │
│ Unsqueeze  │ 1              │ 1                │
│ Model Size │ 3.7MiB         │ 3.7MiB           │
└────────────┴────────────────┴──────────────────┘

Model optimizing complete!

Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!

Model loaded ========================================================================

Model conversion started ============================================================
INFO: input_op_name: input shape: [36, 17, 2] dtype: float32
WARNING: The optimization process for shape estimation is skipped because it contains OPs that cannot be inferred by the standard onnxruntime.
WARNING: axes don't match array

INFO: 2 / 23
INFO: onnx_op_type: Unsqueeze onnx_op_name: wa/Unsqueeze
INFO:  input_name.1: input shape: [36, 17, 2] dtype: float32
INFO:  output_name.1: wa/Unsqueeze_output_0 shape: [1, 36, 17, 2] dtype: float32
INFO: tf_op_type: reshape
INFO:  input.1.tensor: name: input shape: (36, 2, 17) dtype: <dtype: 'float32'>
INFO:  input.2.shape: val: [1, 36, 2, 17]
INFO:  output.1.output: name: tf.reshape/Reshape:0 shape: (1, 36, 2, 17) dtype: <dtype: 'float32'>

...

INFO: 23 / 23
INFO: onnx_op_type: Softmax onnx_op_name: wa/softmax/Softmax
INFO:  input_name.1: wa/d5/Gemm_output_0 shape: [1, 4] dtype: float32
INFO:  output_name.1: output shape: [1, 4] dtype: float32
INFO: tf_op_type: softmax_v2
INFO:  input.1.logits: name: tf.__operators__.add_1/AddV2:0 shape: (1, 4) dtype: <dtype: 'float32'>
INFO:  input.2.axis: val: 1
INFO:  output.1.output: name: tf.nn.softmax/wa/softmax/Softmax:0 shape: (1, 4) dtype: <dtype: 'float32'>

saved_model output started ==========================================================
saved_model output complete!
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1717996987.345648   14492 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1717996987.345676   14492 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
Float32 tflite output complete!
W0000 00:00:1717996987.564488   14492 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1717996987.564509   14492 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
Float16 tflite output complete!
Input signature information for quantization
signature_name: serving_default
input_name.0: input shape: (36, 2, 17) dtype: <dtype: 'float32'>
W0000 00:00:1717996988.553677   14492 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1717996988.553699   14492 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
Dynamic Range Quantization tflite output complete!
W0000 00:00:1717996988.857577   14492 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1717996988.857598   14492 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
Segmentation fault (core dumped)


The difference between the models I have mapped to the Edge TPU and this model is that this model has an LSTM layer. Dense and CNN networks quantize and map perfectly, but the LSTM network is causing problems.
karen-ca commented 2 months ago

I solved the problem: I combined the tactics from the "Train an LSTM weather forecasting model for the Coral Edge TPU" notebook and the nobuco PyTorch-to-Keras converter. Now I have this result. Thank you!
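For context, a common ingredient of such recipes is keeping every tensor statically shaped, e.g. by unrolling the LSTM. A minimal sketch of that kind of Keras model definition (the 64 units and the per-frame flattening of (17, 2) keypoints into 34 features are assumptions for illustration, not the notebook or the converted model verbatim):

import tensorflow as tf

# Fixed batch size and an unrolled LSTM leave no dynamic-sized
# state tensors in the converted TFLite graph.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(36, 34), batch_size=1),  # 36 frames x 34 features
    tf.keras.layers.LSTM(64, unroll=True),                # statically unrolled cells
    tf.keras.layers.Dense(4, activation="softmax"),       # 4 activity classes
])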

[Screenshot: 2024-06-10 at 11:43:44]