Closed · thomaszheng closed this issue 2 years ago
Hi @thomaszheng ! Model binary sizes are closely correlated to the number of ops used in the model. TensorFlow Lite enables you to reduce model binary sizes by using selective builds.
You can reduce the size by using multiple TFLite models built with select ops, or by reducing the number of supported target_archs (remove armeabi-v7a if possible). Thanks!
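The selective-build approach above starts from a conversion that only allows select TF ops as a fallback. A minimal sketch of that conversion, using a hypothetical placeholder Keras model (not the author's model, which is unknown here):

```python
import tensorflow as tf

# Placeholder model for illustration only; the author's real model uses
# the 23 select TF ops listed later in this thread.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Allow built-in TFLite ops, and fall back to select TF ops for anything
# without a built-in kernel; only the TF ops the model actually uses
# end up in tensorflow-lite-select-tf-ops.aar after a selective build.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `model.tflite` is what `build_aar.sh` later scans to decide which op kernels to compile in.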
Thank you for your reply. My app uses only one model, and armeabi-v7a can't be removed either, so I'm looking for other options.
These 23 non-built-in ops require a 6.5 MB select-TF-ops AAR, while the whole tensorflow-lite.aar
is only 5 MB with x86, x86_64, arm64-v8a, and armeabi-v7a, and after the selective build it is only 1.28 MB for arm64-v8a and armeabi-v7a.
["AddN","BiasAddGrad","BroadcastGradientArgs","Cast","ConcatOffset","EmptyTensorList","ReluGrad","Restore","Save","ShapeN","SigmoidGrad","StridedSliceGrad","TensorListElementShape","TensorListFromTensor","TensorListGetItem","TensorListLength","TensorListPopBack","TensorListPushBack","TensorListReserve","TensorListSetItem","TensorListStack","UnsortedSegmentSum","ZerosLike"]
so I feel like there is still a lot of room for optimization.
Ok @thomaszheng ! Can you add this line during TFLite conversion for optimum size, rebuild the AAR files from the result, and let us know?
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.convert()
Ok @mohantym ! I used this conversion command before (TF version 2.8.0):
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
model.tflite size is 4M
1277026 Apr 12 09:34 tensorflow-lite.aar*
6926947 Apr 12 09:17 tensorflow-lite-select-tf-ops.aar*
Now I use:
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
model_n.tflite size is 8.6M
Then, I rebuild the reduced-binary-size AAR:
bash tensorflow/lite/tools/build_aar.sh --input_models=/host_dir/model_n.tflite --target_archs=arm64-v8a,armeabi-v7a
1273647 Apr 12 08:53 tensorflow-lite.aar*
6926947 Apr 12 09:17 tensorflow-lite-select-tf-ops.aar*
From the results, it seems there is no decrease.
@sachinprasadhs ! Could you please look at this issue?
Hi @miaout17, is there any other way, please?
Hi @thomaszheng !
Have you checked this document on reducing binary size by using two TFLite models with different configurations? You can create multiple TFLite models using dynamic-range, float16, or int16 quantization, and use the bash command with specific target archs to get a lighter model. Another option is model optimization, which might render the TFLite model in KBs.
Thank you!
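The quantization modes mentioned above can be sketched as follows, again on a hypothetical placeholder Keras model (the author's actual saved model is not available in this thread):

```python
import tensorflow as tf

# Placeholder model for illustration only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Dynamic-range quantization: weights are stored as int8,
# typically around 4x smaller than float32.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
dynamic_model = converter.convert()

# Float16 quantization: weights are stored as float16,
# typically around 2x smaller than float32.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
fp16_model = converter.convert()
```

Note that quantization shrinks the `.tflite` model file; the AAR size is driven by which ops (and target archs) are compiled in, so both levers are needed.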
@mohantym I have used quantization to get a lighter model, and it reduced the tflite model size from 8 MB to 4 MB, and I will try again. But now I think the sizes of tensorflow-lite.aar (1.2 MB) and tensorflow-lite-select-tf-ops.aar (6.5 MB) are too big.
My model is implemented with reference to the model_personalization tutorial; it trains on the Android device.
I implemented the same function using MNN, and its .so library is 3.27 MB (arm64-v8a) and 1.79 MB (armeabi-v7a).
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
@thomaszheng ! Were you able to reduce the file size further? Feel free to re-open this issue if you are still looking for further assistance. Thank you!
Yes, I want to use TFLite if the file size is not too big.
System information
Describe the problem
I followed the reduce_binary_size guide and executed the commands on model.tflite,
and got a tensorflow-lite-select-tf-ops.aar of 6.5 MB.
But its size is still too big. Can I reduce its size further? For example, build for CPU only, or build without delegates?
Any other info / logs
Intermediate file in tmp:
ops_list.txt