rose-jinyang opened this issue 3 years ago
Can you try the following code to fix the model's input size during conversion?
# Fix the model input batch size to 1
input_name = float_model.input_names[0]
index = float_model.input_names.index(input_name)
float_model.inputs[index].set_shape([1, 512, 512, 3])
converter = tf.lite.TFLiteConverter.from_keras_model(float_model)
tflite_model = converter.convert()
open("slimnet_float.tflite", "wb").write(tflite_model)
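To confirm the conversion actually produced a fully static input shape (the GPU delegate rejects any dimension left dynamic), you can inspect the converted flatbuffer with `tf.lite.Interpreter`. A minimal sketch, using a one-layer stand-in model since the trained SlimNet weights aren't available here:

```python
import tensorflow as tf

# Stand-in for the trained SlimNet: a single conv layer whose batch
# dimension is pinned to 1 at model-definition time (assumption: any
# Keras model with a static [1, 512, 512, 3] input behaves the same).
float_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512, 512, 3), batch_size=1),
    tf.keras.layers.Conv2D(8, 3, padding="same"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(float_model)
tflite_model = converter.convert()

# The GPU delegate only supports static-sized tensors, so every entry
# in the input shape must be a concrete positive size (no -1 / None).
details = tf.lite.Interpreter(model_content=tflite_model).get_input_details()
shape = details[0]["shape"]
print(shape.tolist())
```

If the printed shape still contains a `-1` (or a `1` paired with a `-1` in `shape_signature`), the graph kept a dynamic dimension and the delegate will fail with the same error.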
But I ran into the following issue.
Can you run the SlimNet model on a phone GPU?
I tested it with the MediaPipe API on Android (segmentation) and it worked at that time. I believe it uses the old GPU delegate with OpenGL instead of OpenCL.
Hello, how are you? Thanks for contributing to this project. I trained a model with the SlimNet architecture on my dataset and got a TFLite model. The TFLite model works well on the CPU, and I want to run it on a mobile GPU. When loading the model with the Android GPU delegate, I got the following issue.
Caused by: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
I searched for a solution but have not found a correct one yet. https://github.com/tensorflow/tensorflow/issues/38036
Could you help me?
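If pinning the Keras input shape does not remove the dynamic batch dimension, an alternative route (a sketch, not tested against SlimNet itself) is to trace the model as a `tf.function` with a fully static `TensorSpec` and convert the resulting concrete function, so the exported graph never sees a `None` dimension:

```python
import tensorflow as tf

# Stand-in for the trained SlimNet model (assumption: the real model
# is a Keras model taking a [None, 512, 512, 3] float input).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512, 512, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same"),
])

# Trace the model with a static [1, 512, 512, 3] signature so the
# converted graph has no dynamic batch dimension left.
run = tf.function(lambda x: model(x))
concrete_fn = run.get_concrete_function(
    tf.TensorSpec([1, 512, 512, 3], tf.float32))

# Note: the second argument (the trackable object) is required on
# newer TF releases; older ones accept just the function list.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [concrete_fn], model)
tflite_model = converter.convert()
open("slimnet_float.tflite", "wb").write(tflite_model)
```

Since the GPU delegate only supports static-sized tensors, a model converted this way should at least clear the "dynamic-sized tensors" check; whether every SlimNet op is then supported by the delegate is a separate question.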