google-ai-edge / LiteRT

LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
https://ai.google.dev/edge/litert
Apache License 2.0

Bias fails to broadcast in the context of matmul in tf lite model #166

Open · gaikwadrahul8 opened this issue 5 days ago

gaikwadrahul8 commented 5 days ago

1. System information

2. Code

This is the minimized code to reproduce the issue:

import tensorflow as tf
import numpy as np

x1 = tf.constant([1., 2.], shape=[1, 2])

class Model(tf.keras.Model):
  def __init__(self):
    super(Model, self).__init__()
    self.w = tf.Variable([[3., 4.], [5., 6.]])
    # The bias has a single element; Keras broadcasts it across the matmul
    # output of width 2, but TFLite's fused FULLY_CONNECTED kernel does not.
    self.b = tf.Variable([3.])

  @tf.function(input_signature=[tf.TensorSpec(x1.shape, x1.dtype)])
  def call(self, x):
    return tf.matmul(x, self.w) + self.b

m = Model()
print('Keras mode output: ', m(x1).numpy())

converter = tf.lite.TFLiteConverter.from_keras_model(m)
tflite_model = converter.convert()
def _evaluateTFLiteModel(tflite_model, input_data):
    # Run the converted flatbuffer through the TFLite interpreter.
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    # allocate_tensors() is where the FULLY_CONNECTED prepare check reported below fails.
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    for i in range(len(input_data)):
        interpreter.set_tensor(input_details[i]['index'], input_data[i])

    interpreter.invoke()

    output_data = [interpreter.get_tensor(output_details[i]['index'])
                   for i in range(len(output_details))]
    return output_data

print('Lite mode output: ', _evaluateTFLiteModel(tflite_model,[x1])[0])

3. Failure after conversion

Output:

Keras mode output: [[16. 19.]]
Lite mode output:  RuntimeError: tensorflow/lite/kernels/fully_connected.cc:360 NumElements(bias) != SizeOfDimension(filter, 0) (1 != 2)
Node number 0 (FULLY_CONNECTED) failed to prepare.
Failed to apply the default TensorFlow Lite delegate indexed at 0.
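For additional context when triaging: the prepare-time check at fully_connected.cc:360 requires the bias to have as many elements as the filter's output dimension (2 here), while Keras happily broadcasts the 1-element bias. A small diagnostic sketch, assuming a TensorFlow release that ships the experimental model analyzer (2.9+), can confirm that the converter fused the matmul and add into a single FULLY_CONNECTED node whose bias tensor still has only one element:

import tensorflow as tf

# `tflite_model` is the flatbuffer produced by converter.convert() in the repro above.
# The analyzer prints the op graph, including the FULLY_CONNECTED node and the
# shapes of its weight and bias tensors.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)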

Conversion Failure: none; converter.convert() itself succeeds, and the error above is raised when the interpreter prepares the fused FULLY_CONNECTED node.
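Until the converter or the FULLY_CONNECTED kernel handles the broadcast, a possible workaround on the model-authoring side is to store the bias already expanded to the output width so the fused node's bias check passes. This is only a sketch against the repro above, not a fix for the underlying issue; the class name ModelWithWideBias is hypothetical, and it reuses _evaluateTFLiteModel from the repro:

import tensorflow as tf

x1 = tf.constant([1., 2.], shape=[1, 2])

class ModelWithWideBias(tf.keras.Model):
  def __init__(self):
    super().__init__()
    self.w = tf.Variable([[3., 4.], [5., 6.]])
    # Pre-broadcast the 1-element bias to the filter's output width (2) so
    # NumElements(bias) == SizeOfDimension(filter, 0) holds at prepare time.
    self.b = tf.Variable(tf.broadcast_to([3.], [2]))

  @tf.function(input_signature=[tf.TensorSpec(x1.shape, x1.dtype)])
  def call(self, x):
    return tf.matmul(x, self.w) + self.b

m2 = ModelWithWideBias()
tflite_model2 = tf.lite.TFLiteConverter.from_keras_model(m2).convert()
# Expected to print the same values as the Keras path: [[16. 19.]]
print('Lite mode output: ', _evaluateTFLiteModel(tflite_model2, [x1.numpy()])[0])

The same effect could be had by writing the broadcast explicitly inside call() (e.g. adding tf.broadcast_to(self.b, [2])); either way this only sidesteps the missing broadcast in the kernel rather than fixing it.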

gaikwadrahul8 commented 4 days ago

This issue, originally reported by @YaoJiayi, has been moved to this dedicated LiteRT repository to improve issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.

pkgoogle commented 16 hours ago

Original Issue: https://github.com/tensorflow/tensorflow/issues/60929