tensorflow / compression

Data compression in TensorFlow
Apache License 2.0

RuntimeError: Encountered unresolved custom op: UnboundedIndexRangeEncode. Node number 3 (UnboundedIndexRangeEncode) failed to prepare. Node number 108 (WHILE) failed to invoke. #119

Closed: prmudgal closed this 2 years ago

prmudgal commented 2 years ago

Describe the bug

```python
import argparse
import io
import os
import sys
import urllib

from absl import app
from absl.flags import argparse_flags
import cv2
import numpy as np
import requests  # needed for the download below; missing from the original listing
import tensorflow.compat.v1 as tf
import tensorflow as tf  # note: this shadows the compat.v1 import above

from tensorflow.python import pywrap_tensorflow
import tensorflow_compression as tfc  # pylint:disable=unused-import
```

```python
# This line takes time, as the model is huge.
url1 = 'https://storage.googleapis.com/tensorflow_compression/metagraphs/hific-lo.metagraph'
r = requests.get(url1, allow_redirects=True)
open('hific-lo.metagraph', 'wb').write(r.content)

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('hific-lo.metagraph')
    saver.restore(sess, latest_checkpoint_path)  # latest_checkpoint_path defined elsewhere
```

```python
inputs = None
outputs = None
signature = 'sender'

# Replace the URL with your local path and model.
request = urllib.request.urlopen(
    'https://storage.googleapis.com/tensorflow_compression/metagraphs/hific-lo.metagraph')
try:
    string = request.read()
finally:
    request.close()
metagraph = tf.compat.v1.MetaGraphDef()
loaded = metagraph.ParseFromString(string)

wrapped_import = tf.compat.v1.wrap_function(
    lambda: tf.compat.v1.train.import_meta_graph('hific-lo.metagraph'), [])
graph = wrapped_import.graph
print("**** Step 1 ok ****")

inputs = metagraph.signature_def['sender'].inputs
concrete_function = metagraph.signature_def['sender']
inputs = [graph.as_graph_element(inputs[k].name) for k in sorted(inputs)]
print("**** Step 2 ok ****")

outputs = metagraph.signature_def[signature].outputs
outputs = [graph.as_graph_element(outputs[k].name) for k in sorted(outputs)]
print("**** Step 3 ok ****")

converter = tf.lite.TFLiteConverter.from_session(sess, inputs, outputs)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
print("**** Step 4 ok ****")

# Replace the path with your local path.
open("converted_model.tflite", "wb").write(tflite_model)
print("**** Saved tflite model ****")
```
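As an aside, the `sorted()` calls above matter: a `SignatureDef`'s `inputs`/`outputs` are unordered maps from logical names to tensor names, so sorting the keys is what gives the converter a deterministic tensor order. A minimal sketch of that mapping step using a plain dict (the tensor names here are made up for illustration):

```python
# Hypothetical signature map: logical input name -> tensor name, shaped like
# a SignatureDef's `inputs` field (names are invented for this sketch).
signature_inputs = {
    "input_image": "Placeholder:0",
    "target_bpp": "bpp:0",
}

# Sorting the logical names yields a stable tensor-name order, which is
# what the list comprehension in the snippet above relies on.
ordered = [signature_inputs[k] for k in sorted(signature_inputs)]
print(ordered)  # ['Placeholder:0', 'bpp:0']
```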

```python
converter = tf.lite.TFLiteConverter.from_saved_model(curr_dir + "saved_model_2")
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```
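A quick sanity check on the converter output, independent of whether its ops can actually run: serialized TFLite models are flatbuffers carrying the file identifier `TFL3` at byte offset 4. This sketch (a heuristic, not an official validation API) only checks that the bytes look like a TFLite flatbuffer:

```python
def looks_like_tflite(model_bytes: bytes) -> bool:
    """Heuristic check: TFLite flatbuffers carry the file identifier
    b'TFL3' at byte offset 4, after the 4-byte root-table offset."""
    return len(model_bytes) >= 8 and model_bytes[4:8] == b"TFL3"

# Usage (assuming `tflite_model` holds the converter output):
#   assert looks_like_tflite(tflite_model)
```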

```python
# Load the TFLite model.
input_image = np.array(np.random.random_sample([1, 128, 128, 3]), dtype=np.uint8)
interpreter = tf.lite.Interpreter("converted_model.tflite")
interpreter.allocate_tensors()
sign_list = interpreter.get_signature_list()
print("**** Loaded tflite model ****")

# Obtain the input/output shapes and types.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

input_data_resized = np.reshape(input_image, (1, 128, 128, 3))
interpreter.resize_tensor_input(input_details[0]['index'], (1, 128, 128, 3))
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], input_data_resized)

interpreter.allocate_tensors()
interpreter.invoke()
```
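Separate from the custom-op error, a common cause of `invoke()` failures is a dtype or shape mismatch between `set_tensor` and what the model expects (here a random `uint8` image is fed, while many converted models expect `float32`). A small hedged helper that validates an array against one entry of `get_input_details()` before calling `set_tensor` (the `'shape'`/`'dtype'` keys match TFLite's documented dict layout; the example entry below is made up):

```python
import numpy as np

def check_input(details: dict, array: np.ndarray) -> None:
    """Raise ValueError if `array` does not match one entry from
    interpreter.get_input_details() (keys 'shape' and 'dtype')."""
    expected_shape = tuple(details["shape"])
    if tuple(array.shape) != expected_shape:
        raise ValueError(f"shape {array.shape} != expected {expected_shape}")
    if array.dtype != details["dtype"]:
        raise ValueError(f"dtype {array.dtype} != expected {details['dtype']}")

# Hypothetical details entry, shaped like TFLite's get_input_details() output:
details = {"shape": np.array([1, 128, 128, 3]), "dtype": np.float32}

check_input(details, np.zeros((1, 128, 128, 3), dtype=np.float32))  # passes
try:
    check_input(details, np.zeros((1, 128, 128, 3), dtype=np.uint8))
except ValueError as e:
    print("caught:", e)
```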

Expected behavior
The interpreter should invoke without error, but instead it fails with:

```
RuntimeError: Encountered unresolved custom op: UnboundedIndexRangeEncode.
Node number 3 (UnboundedIndexRangeEncode) failed to prepare.
Node number 108 (WHILE) failed to invoke.
```

See git here


jonycgn commented 2 years ago

Hi, unfortunately TF Lite is not going to support the range coding ops natively, as they aren't built into TensorFlow. Have you tried following up with the TF Lite developers? There might be ways to include custom TF ops in TF Lite.
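For background on why this surfaces at prepare time: TF Lite resolves each node's opcode against a registry of kernels before any data flows, and `allow_custom_ops=True` only lets the converter *emit* an op like `UnboundedIndexRangeEncode`; unless a kernel for it is registered with the interpreter at runtime, preparation fails. A toy dispatch-table sketch of that resolution step, not TF Lite's actual implementation (op and registry names are invented):

```python
# Toy model of TF Lite op resolution: a registry maps op names to kernel
# callables. Registered ops resolve; an unregistered custom op fails at
# "prepare" time, before the graph ever runs.
registry = {"CONV_2D": lambda x: x, "RESHAPE": lambda x: x}

def prepare(graph_ops):
    for i, op in enumerate(graph_ops):
        if op not in registry:
            raise RuntimeError(
                f"Encountered unresolved custom op: {op}. "
                f"Node number {i} ({op}) failed to prepare.")
    return [registry[op] for op in graph_ops]

try:
    prepare(["CONV_2D", "UnboundedIndexRangeEncode"])
except RuntimeError as e:
    print(e)
```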

prmudgal commented 2 years ago

@jonycgn Thanks for your reply. I followed up with TFLite initially and got suggestion to move this bug/feature to tensorflow/compression.

jonycgn commented 2 years ago

So, I don't have much experience with TF Lite, but my impression is that you have two options:

prmudgal commented 2 years ago

Thanks. I will follow up with TFLite to see if they can support this request. A question that is not related to this thread, but very much related to HiFiC:

I retrained the HiFiC model, and in order to use the tfci file in its current form, I need to pack the .meta, .index, and .data-00000-of-00001 files as a .metagraph. Can you please share whether there is a defined process for doing this?

Advance apologies, as I know it is unrelated to this thread.

jonycgn commented 2 years ago

I responded on the thread in the Google group. Closing this for now.