google-ai-edge / LiteRT

LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
https://ai.google.dev/edge/litert
Apache License 2.0

Ops listed in 'experimental_select_user_tf_ops' not being recognized by tf lite converter #155

Open gaikwadrahul8 opened 3 days ago

gaikwadrahul8 commented 3 days ago

1. System information

2. Code

#### Model Definition

```python
import tensorflow as tf


class CppTfTest(tf.Module):

    def __init__(self, name=None):
        super().__init__(name=name)

    @tf.function
    def call(self):
        # 600 frame indices and 600 random "BPM" values in [0, 90).
        frames = tf.range(600)

        bpm = tf.random.uniform(
            tf.TensorShape([600]),
            minval=0,
            maxval=90,
            dtype=tf.dtypes.float64,
        )

        return bpm, frames
```

#### Model Saving

```python
cpp_tf_test = CppTfTest()
tf.saved_model.save(
    cpp_tf_test,
    'cpp_tf_test',
    signatures=cpp_tf_test.call.get_concrete_function(),
)
```
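
For what it's worth, the TF ops that actually end up in the traced graph can be listed like this (a quick sanity check I added for this report, assuming the model defined above):

```python
# List the op types in the concrete function's graph; these are the ops
# the converter will have to handle. For this model the output should
# include 'RandomUniform' and 'Mul' among others.
concrete_fn = cpp_tf_test.call.get_concrete_function()
print(sorted({op.type for op in concrete_fn.graph.get_operations()}))
```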

#### Model Conversion

```python
converter = tf.lite.TFLiteConverter.from_saved_model('cpp_tf_test')

converter.target_spec = tf.lite.TargetSpec(
    supported_ops=[tf.lite.OpsSet.TFLITE_BUILTINS],
    experimental_select_user_tf_ops=[
        'RandomUniform', 'Mul',
    ],
)

tflite_model = converter.convert()

with open('cpp_tf_test.tflite', 'wb') as f:
    f.write(tflite_model)
```

```
ConverterError                            Traceback (most recent call last)
in <cell line: 47>()
     45 #converter.allow_custom_ops=True
     46
---> 47 tflite_model = converter.convert()
     48
     49 with open('cpp_tf_test.tflite', 'wb') as f:

7 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py in wrapper(self, *args, **kwargs)
    960   def wrapper(self, *args, **kwargs):
    961     # pylint: disable=protected-access
--> 962     return self._convert_and_export_metrics(convert_func, *args, **kwargs)
    963     # pylint: enable=protected-access
    964

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py in _convert_and_export_metrics(self, convert_func, *args, **kwargs)
    938     self._save_conversion_params_metric()
    939     start_time = time.process_time()
--> 940     result = convert_func(self, *args, **kwargs)
    941     elapsed_time_ms = (time.process_time() - start_time) * 1000
    942     if result:

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py in convert(self)
   1245         graph_def)
   1246
-> 1247     return self._convert_from_saved_model(graph_def)
   1248
   1249

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py in _convert_from_saved_model(self, graph_def)
   1128     converter_kwargs.update(quant_mode.converter_flags())
   1129
-> 1130     result = _convert_saved_model(**converter_kwargs)
   1131     return self._optimize_tflite_model(
   1132         result, quant_mode, quant_io=self.experimental_new_quantizer)

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert_phase.py in wrapper(*args, **kwargs)
    210       else:
    211         report_error_message(str(converter_error))
--> 212       raise converter_error from None  # Re-throws the exception.
    213     except Exception as error:
    214       report_error_message(str(error))

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert_phase.py in wrapper(*args, **kwargs)
    203   def wrapper(*args, **kwargs):
    204     try:
--> 205       return func(*args, **kwargs)
    206     except ConverterError as converter_error:
    207       if converter_error.errors:

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert.py in convert_saved_model(**kwargs)
    830   model_flags = build_model_flags(**kwargs)
    831   conversion_flags = build_conversion_flags(**kwargs)
--> 832   data = convert(
    833       model_flags.SerializeToString(),
    834       conversion_flags.SerializeToString(),

/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert.py in convert(model_flags_str, conversion_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    320     for error_data in _metrics_wrapper.retrieve_collected_errors():
    321       converter_error.append_error(error_data)
--> 322     raise converter_error
    323
    324   return _run_deprecated_conversion_binary(model_flags_str,

ConverterError: <unknown>:0: error: loc(callsite(callsite(fused["RandomUniform:", "random_uniform/RandomUniform@__inference_call_11165"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_11173"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.RandomUniform' op is neither a custom op nor a flex op
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
<unknown>:0: note: loc(callsite(callsite(fused["RandomUniform:", "random_uniform/RandomUniform@__inference_call_11165"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_11173"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
<unknown>:0: error: loc(callsite(callsite(fused["Mul:", "random_uniform/Mul@__inference_call_11165"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_11173"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
<unknown>:0: note: loc(callsite(callsite(fused["Mul:", "random_uniform/Mul@__inference_call_11165"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_11173"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
<unknown>:0: error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Mul, RandomUniform
Details:
        tf.Mul(tensor<600xf64>, tensor<f64>) -> (tensor<600xf64>) : {device = ""}
        tf.RandomUniform(tensor<1xi32>) -> (tensor<600xf64>) : {device = "", seed = 0 : i64, seed2 = 0 : i64}
```

I'm trying to convert this simple model to TensorFlow Lite using the `experimental_select_user_tf_ops` flag to tell the converter which operations from the TF ops set to include. I need the model to run with just a subset of the TF ops set, because I have a bigger model that I need to optimize for a mobile app. I've tried many things, but the `experimental_select_user_tf_ops` flag just doesn't seem to have any effect.
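
For reference, the `tf.lite.TargetSpec` documentation describes `experimental_select_user_tf_ops` as working in conjunction with the `tf.lite.OpsSet.SELECT_TF_OPS` flag. A minimal sketch of that documented combination (shown here only for comparison, not as a confirmed workaround) would look like this:

```python
# Documented usage sketch: experimental_select_user_tf_ops is meant to be
# used together with SELECT_TF_OPS. Whether it then restricts the exported
# Flex ops to the listed subset is exactly what this issue is about.
converter = tf.lite.TFLiteConverter.from_saved_model('cpp_tf_test')
converter.target_spec = tf.lite.TargetSpec(
    supported_ops=[
        tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer native TFLite kernels
        tf.lite.OpsSet.SELECT_TF_OPS,    # allow TF (Flex) fallback
    ],
    experimental_select_user_tf_ops=['RandomUniform', 'Mul'],
)
tflite_model = converter.convert()
```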

gaikwadrahul8 commented 2 days ago

This issue, originally reported by @anselmo0v, has been moved to this dedicated repository for LiteRT to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.