tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

Inference script for custom pose classification model, for both .tflite and .hdf5 formats #58102

Closed akashAD98 closed 1 year ago

akashAD98 commented 1 year ago

I'm using Windows 10.

I followed the blog post to train the pose model, but there is no inference script. Could you please provide one? https://github.com/tensorflow/tensorflow/issues/58101

mohantym commented 1 year ago

Hi @akashAD98! Sorry for the late response.

You can set the inference input and output types to float32 as a workaround:

converter.inference_input_type = tf.float32
converter.inference_output_type = tf.float32

Ref: detailed usage can be found here.

Could you close #58101 as a duplicate of this issue?

Thank you!

akashAD98 commented 1 year ago

Is this correct, @mohantym?


import tensorflow as tf

# Convert the trained Keras pose classifier to TFLite with float32 I/O.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.inference_input_type = tf.float32   # float32 inputs
converter.inference_output_type = tf.float32  # float32 outputs

converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

print('Model size: %dKB' % (len(tflite_model) / 1024))

with open('pose_classifier.tflite', 'wb') as f:
  f.write(tflite_model)
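As a quick sanity check (a sketch, assuming the conversion script above has just run in the same session), you can confirm that the converted model's input and output tensors are float32:

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print('Input dtype:', interpreter.get_input_details()[0]['dtype'])
print('Output dtype:', interpreter.get_output_details()[0]['dtype'])
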
mohantym commented 1 year ago

@akashAD98!

Yeah! That's the expected syntax for float32 quantization. Could you let us know whether this resolves the issue?

Thank you!

akashAD98 commented 1 year ago

Sure, @mohantym.

akashAD98 commented 1 year ago

@mohantym It's not working; I'm getting a tracker warning and no output.

python pose_estimation.py --label_file labels.txt --model movenet_thunder --classifier pose_classifierfloat32

The size of the converted TFLite model is 27 KB. Is this fine?

The warning I'm getting:

WARNING:root:No tracker will be used as tracker can only be enabled for MoveNet MultiPose model.

I'm not using any tracker.

The default weights provided by your repo work fine, but my trained model does not work here.

mohantym commented 1 year ago

Hi @akashAD98! For Raspberry Pi, integer quantization is preferred to make inference faster. For Android/desktop, float32 quantization is preferred.
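For reference, a minimal full-integer quantization sketch (assuming model is the trained Keras classifier; the representative samples and the (1, 51) input shape are assumptions, and real calibration should use actual training data):

import numpy as np
import tensorflow as tf

def representative_dataset():
  # Yield a few samples shaped like the classifier's input;
  # (1, 51) assumes 17 landmarks x (y, x, score), as in the tutorial.
  for _ in range(100):
    yield [np.random.rand(1, 51).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_quant_model = converter.convert()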

It is a bit hard to connect the dots between your TFLite model and the above inference script. Can you share your float32 conversion script along with the inference script as a Colab gist?

Thank you!

akashAD98 commented 1 year ago

@mohantym I'm using the script provided by TF. I just want to run inference with my trained model, so it's fine even if it's not a TFLite weight file; .hdf5/.h5 inference is fine too, but that part is not provided here. My goal is to run inference with this model.

Here is the official Colab for the same: https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb

mohantym commented 1 year ago

OK, @akashAD98!

The tracker message is a benign warning that triggers whenever a MultiPose model is not provided.

Got your point about inference with a custom pose estimation model, though. For the prediction part, you can either use a signature runner or invoke the interpreter with a test image to get the output. Ref 1, 2.
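For the invoke path, a minimal sketch might look like this (the file name pose_classifier.tflite and the (1, 51) input shape are assumptions; replace the random input with a real pose embedding):

import numpy as np
import tensorflow as tf

# Load the converted classifier from disk (file name is an assumption).
interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

# One pose embedding; (1, 51) assumes 17 landmarks x (y, x, score).
test_input = np.random.rand(1, 51).astype(np.float32)
interpreter.set_tensor(input_index, test_input)
interpreter.invoke()
probs = interpreter.get_tensor(output_index)[0]
print('Predicted class index:', np.argmax(probs))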

Yeah! I also feel the documentation needs a sample inference code snippet (with visualization).
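And for the .hdf5/.h5 side of the question, a minimal Keras inference sketch could be (file name and input shape are assumptions):

import numpy as np
from tensorflow import keras

# Load the trained pose classifier saved in HDF5 format (assumed path).
model = keras.models.load_model('pose_classifier.h5')

# Pose embedding input; (1, 51) assumes 17 landmarks x (y, x, score).
landmarks = np.random.rand(1, 51).astype(np.float32)
probs = model.predict(landmarks)
print('Predicted class index:', np.argmax(probs[0]))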

@sachinprasadhs! Could you look at this issue?

Thank you!

pjpratik commented 1 year ago

Hi @akashAD98

Inference for the TFLite model is shown in the evaluate function of the notebook. The same approach works for a custom TFLite model as well.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.metrics import accuracy_score

def evaluate_model(interpreter, X, y_true):
  """Evaluates the given TFLite model and returns its accuracy."""
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on all given poses.
  y_pred = []
  for i in range(len(y_true)):
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = X[i: i + 1].astype('float32')
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the class with highest
    # probability.
    output = interpreter.tensor(output_index)
    predicted_label = np.argmax(output()[0])
    y_pred.append(predicted_label)

  # Compare prediction results with ground truth labels to calculate accuracy.
  y_pred = keras.utils.to_categorical(y_pred)
  return accuracy_score(y_true, y_pred)

# Evaluate the accuracy of the converted TFLite model
classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model)
classifier_interpreter.allocate_tensors()
print('Accuracy of TFLite model: %s' %
      evaluate_model(classifier_interpreter, X_test, y_test))
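To evaluate a .tflite file saved on disk instead of the in-memory tflite_model bytes, load it by path (the file name below is an assumption):

classifier_interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
classifier_interpreter.allocate_tensors()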

Thanks.

github-actions[bot] commented 1 year ago

This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.

github-actions[bot] commented 1 year ago

This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.