creativesh opened this issue 2 years ago
It looks like you included the logic for decoding a JPEG image inside the tflite model. Usually, a vision tflite model takes a decoded RGB image buffer directly as input. Your `val catBitmap = getBitmapFromAsset("bwr.jpg")`
already decodes the image into a bitmap. You can update your tflite conversion code and strip off the image preprocessing. Without the decoder, hopefully you won't need to depend on `tf.lite.OpsSet.SELECT_TF_OPS`,
and you can save binary size.
@lu-wang-g
Thanks for your answer. Would you please tell me exactly which part of the code I should change? Sorry, I did not understand. I know my original network takes a `.gfile` as input, and I only have the `.pb` file of the model, not the training code, so I cannot change the input layer of the original model.
Any comment on how to resolve this issue?
You can try converting your model with a signature def. See https://www.tensorflow.org/lite/guide/signatures#convert_a_model_with_signatures.
Add a signature with your desired inputs/outputs to the SavedModel so that JPEG decoding is not included in the graph.
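For illustration, here is a minimal TF2 sketch of that approach. The model class, tensor shape, and export path below are hypothetical stand-ins, not the asker's actual network: the point is that the serving signature accepts an already-decoded float RGB tensor, so no JPEG-decoding op is ever traced into the graph.

```python
import tensorflow as tf

# Hypothetical stand-in model: the serving signature takes a decoded RGB
# float tensor, so no JPEG-decoding op ends up in the graph.
class RgbModel(tf.Module):
    @tf.function(input_signature=[
        tf.TensorSpec([None, 224, 224, 3], tf.float32, name="rgb_input")])
    def predict(self, rgb_input):
        # Placeholder computation standing in for the real network.
        return {"scores": tf.reduce_mean(rgb_input, axis=[1, 2])}

model = RgbModel()
tf.saved_model.save(model, "/tmp/rgb_savedmodel",
                    signatures={"serving_default": model.predict})

# Convert from the SavedModel; since the graph contains no decode op, the
# converter should not need SELECT_TF_OPS for it.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/rgb_savedmodel")
tflite_bytes = converter.convert()
```

With this layout, decoding the JPEG into a bitmap stays in the client code (as in the Kotlin snippet above), and only the tensor math ships in the `.tflite` binary.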
@lu-wang-g I have read the documentation and wrote some code, but I ran into an error.
```python
meta_path = 'ckpt_tagging/sensifai_tagging.ckpt.meta'  # Your .meta file
output_node_names = ['multi_predictions']  # Output nodes
checkpoint_path = 'ckpt_tagging/sensifai_tagging.ckpt'
export_path = 'test_remove/'

with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)
    # Load weights
    model = saver.restore(sess, checkpoint_path)

    @tf.function()
    def my_predict(my_prediction_inputs):
        inputs = {
            'my_serving_input': 'resnet_v1_101/Pad/paddings',
        }
        # prediction = model(inputs)
        return {'my_serving_input': my_prediction_inputs}

    my_signatures = my_predict.get_concrete_function(
        my_prediction_inputs=tf.TensorSpec([None, None], dtype=tf.dtypes.float32,
                                           name="resnet_v1_101/Pad/")
    )

    # Save the model.
    tf.saved_model.save(
        model,
        export_dir=export_path,
        signatures=my_signatures,
    )
```
which produces the error:

```
ValueError: Expected a Trackable object for export, got None.
```
Where is the problem? Alternatively, can I directly remove the nodes related to the decode part of the graph? In either case, could you recommend some sample code? I found this link about stripping nodes directly: https://stackoverflow.com/questions/40358892/wipe-out-dropout-operations-from-tensorflow-graph
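An editorial note on the traceback: in TF1, `saver.restore()` restores variables in place and returns `None`, so the `model` passed to `tf.saved_model.save` is `None`, which is exactly what "Expected a Trackable object for export, got None" complains about. One TF1-style way to attach signatures to existing graph tensors without a Trackable object is `tf.compat.v1.saved_model.simple_save`. Below is a runnable sketch using a tiny stand-in graph; in the real case you would `import_meta_graph` and `restore` first, and all tensor names here are hypothetical.

```python
import os
import tempfile
import tensorflow.compat.v1 as tf1

tf1.disable_eager_execution()

export_path = os.path.join(tempfile.mkdtemp(), "model")
graph = tf1.Graph()
with graph.as_default(), tf1.Session(graph=graph) as sess:
    # In the real case you would restore the checkpoint first:
    #   saver = tf1.train.import_meta_graph(meta_path)
    #   saver.restore(sess, checkpoint_path)   # NOTE: returns None by design
    # Here we build a tiny stand-in graph so the sketch is runnable.
    x = tf1.placeholder(tf1.float32, [None, 3], name="rgb_input")
    w = tf1.get_variable("w", initializer=2.0)
    y = tf1.identity(x * w, name="predictions")
    sess.run(tf1.global_variables_initializer())

    # Map signature names directly to graph tensors; no Trackable needed.
    tf1.saved_model.simple_save(
        sess, export_path,
        inputs={"rgb_input": x},
        outputs={"predictions": y},
    )
```

The resulting SavedModel carries a `serving_default` signature keyed to the chosen tensors, which the TFLite converter can then consume.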
I'm not very familiar with how to customize a saved_model and remove certain nodes from the graph. Please open a new issue in the TF github repo. Our TF team will help you from there.
@lu-wang-g Thanks, I have removed the decode nodes from the beginning of the graph and added a new input with desired type and shape, but it hurt the accuracy.
What is the accuracy gap between on-device inference and training? Other than decoding, what else has been changed?
@lu-wang-g The main network's top-1 accuracy is almost 89%; however, the network without the map block labels almost randomly!
This is the image of the original network:
And this is the image of the pruned network:
I removed the map block which decoded the input image, so I had to remove the old input node (byte type) and add a new one with float32 type. This is my tflite model: https://drive.google.com/file/d/1Ke46il3g3xi70RPNa8KKUWhGfJj81ORX/view?usp=sharing
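An editorial aside: an input map block in a TF image pipeline typically does more than decode; resize plus normalization (scaling or per-channel mean subtraction) is common. If those steps were cut out along with the decoder, the caller must reproduce them exactly, or accuracy collapses just as described. A sketch of the kind of preprocessing that may be missing; the mean constants below are the usual ResNet v1 ImageNet values and are an assumption, not taken from this model:

```python
import numpy as np

# Hypothetical preprocessing that the removed map block may have performed
# besides JPEG decoding. The per-channel means are the common ResNet v1 /
# VGG ImageNet constants -- an assumption, not read from this model.
def preprocess(rgb_uint8, mean=(123.68, 116.78, 103.94)):
    x = rgb_uint8.astype(np.float32)
    x -= np.asarray(mean, dtype=np.float32)   # per-channel mean subtraction
    return x[np.newaxis, ...]                 # add batch dimension: [1, H, W, 3]

# Example: a dummy 224x224 RGB frame.
frame = np.full((224, 224, 3), 128, dtype=np.uint8)
batch = preprocess(frame)
```

Feeding raw `[0, 255]` floats into a network trained on mean-subtracted inputs would produce near-random labels, which matches the reported symptom.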
Besides decoding the image, what else does the map block do?
Unfortunately, I don't have complete information about the map block.
Hi, I have a model trained with TensorFlow 1.x, and I converted it to tflite with the code below:
I tested my tflite model with the Python interpreter and got the desired output with this code:
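The test script itself is not included above; for reference, a self-contained sketch of the usual way to exercise a tflite model through the Python interpreter (a toy doubling model stands in for the real one, so the snippet runs as-is):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model so the sketch is self-contained.
class Tiny(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return x * 2.0

tiny = Tiny()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [tiny.__call__.get_concrete_function()], tiny)
tflite_bytes = converter.convert()

# The usual Python-interpreter test loop; for a real model, use
# tf.lite.Interpreter(model_path="model.tflite") instead of model_content.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

Checking `inp["shape"]` and `inp["dtype"]` here is also a quick way to confirm what the converted model actually expects before moving to Android.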
Now I want to write an inference app in Android Studio 7.2.1. The model does not load from the ML folder, so I have to load it with the interpreter like this:
Up to here everything is OK, but when I try to feed an input image to my model with the code below:
I get this error:

```
java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: Unknown image file format. One of JPEG, PNG, GIF, BMP required. (while executing 'DecodeBmp' via Eager)
```
Meanwhile, according to Netron, the input type of my model is string[1], which is what I provide in my code. Would you please help me fix this? What is my mistake?