satyapreetsingh opened this issue 2 years ago
@satyapreetsingh Thanks for the question! When you build an application around the model using TensorFlow Lite Micro (TFLM), all of the parameters you mentioned are calculated by TFLM for you. The link below is an example of an application for an int8 model using TFLM with CMSIS-NN, which in turn calculates any necessary parameters and invokes the correct APIs.
https://github.com/ARM-software/ML-examples/blob/9fc4411bf2c38c5fe50d62d21bda04258d6f8b49/tflm-cmsisnn-mbed-image-recognition/image_recognition/main.cpp#L74
What I am trying to say is that you shouldn't have to calculate those parameters yourself :-)
@felix-johnny Thank you for the reply. Actually, I am not using TensorFlow Lite Micro (TFLM) directly. I am trying to write my own C file on top of the basic CMSIS-NN APIs only, so I ran the post-training quantization script and now need to derive the parameters from my tflite model. Also, I am new to TFLM. I need to run my CNN model on a Cortex-R4F processor; I have compiled CMSIS-NN for the R4F, but I have no idea how to compile TFLM for it. Hence, I am trying to run my own CNN-based C code using the core CMSIS-NN APIs. Is there a way I can derive the output activation min and max parameters from the quantized tflite model I have?
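For what it's worth, the output activation min and max for the int8 kernels are just the clamping bounds applied to the requantized output: with no fused activation they are the full int8 range (-128 and 127), and a fused ReLU-family activation narrows them using the output tensor's scale and zero point. A minimal Python sketch of that calculation (the helper below is hypothetical; it only mirrors the range TFLM derives internally):

def quantized_activation_range(act, output_scale, output_zero_point):
    # Hypothetical helper: int8 clamping bounds for the output of a conv
    # with an optional fused activation, given the output tensor's
    # quantization parameters.
    qmin, qmax = -128, 127  # full int8 range

    def quantize(real_value):
        # Map a real value into the quantized output domain.
        return int(output_zero_point + round(real_value / output_scale))

    if act is None:               # no fused activation
        return qmin, qmax
    if act == "relu":             # clamp below at real 0.0
        return max(qmin, quantize(0.0)), qmax
    if act == "relu6":            # clamp to [0.0, 6.0]
        return max(qmin, quantize(0.0)), min(qmax, quantize(6.0))
    if act == "relu_n1_to_1":     # clamp to [-1.0, 1.0]
        return max(qmin, quantize(-1.0)), min(qmax, quantize(1.0))
    raise ValueError("unknown fused activation: %s" % act)

# Example: a conv output quantized with scale=0.05, zero_point=-10, fused ReLU
print(quantized_activation_range("relu", 0.05, -10))   # -> (-10, 127)

The output scale and zero point themselves can be read from the .tflite file in Python, as in the sketch at the end of this thread.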
Hi @satyapreetsingh, Cortex-R is not officially supported in TFLM, but you could start with https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/cortex_m_generic and add a case for R4 here: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/tools/make/targets/cortex_m_generic_makefile.inc
For example:

else ifeq ($(TARGET_ARCH), cortex-r4)
  CORE=R4
  ARM_LDFLAGS := -Wl,--cpu=Cortex-R4
  GCC_TARGET_ARCH := cortex-r4
See here for how to include CMSIS-NN in the microlite lib: https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/kernels/cmsis_nn
Once you have the microlite lib, you can link it against your application and call TFLM with the CMSIS-NN optimized kernels.
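If it helps, with that makefile case in place the build invocation would look roughly like the one in that README (flags can differ between TFLM versions, so treat this as a sketch rather than the exact command):

make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-r4 OPTIMIZED_KERNEL_DIR=cmsis_nn microlite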
@satyapreetsingh One other detail: you'll have to set TARGET to cortex_m_generic instead of cortex_m_corstone_300. Let us know if it helped!
@felix-johnny and @mansnils
I will try to compile the microlite lib for cortex-r4 and will let you know the progress in due course. Thank you.
Hi, I am trying to deploy the MNIST example given in the following link: https://www.tensorflow.org/lite/performance/post_training_integer_quant. I could obtain the int8 weights and their zero-point and scaling values, but I do not understand the meaning of the input offset, output offset, and output activation min/max values passed to the arm_convolve_s8() API. How do I get these values from the above quantized network in Python?
Anything in this regard is going to help me greatly. Thanks :)
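For reference, a minimal Python sketch of how these values relate to the quantization parameters stored in the .tflite file, assuming the usual conventions of the s8 kernels (the input offset is the negated input zero point, the output offset is the output zero point, and the activation min/max are the int8 clamping bounds discussed earlier in the thread). The file name and the zero-point values at the end are placeholders, not taken from the actual model:

import tensorflow as tf

# Load the fully int8-quantized model produced by post-training quantization.
interpreter = tf.lite.Interpreter(model_path="mnist_model_quant.tflite")  # placeholder name
interpreter.allocate_tensors()

# Dump scale/zero_point for every tensor in the graph; pick out the tensors
# that feed into and come out of the convolution you are hand-coding.
for t in interpreter.get_tensor_details():
    q = t["quantization_parameters"]
    print(t["index"], t["name"], q["scales"], q["zero_points"])

# Suppose the conv's input tensor has zero point zp_in and its output tensor
# has zero point zp_out (read from the dump above). Then, for arm_convolve_s8:
zp_in, zp_out = -128, 17                 # placeholder values
input_offset = -zp_in                    # offset added to the int8 input values
output_offset = zp_out                   # offset added to the requantized output
out_activation_min, out_activation_max = -128, 127   # int8 range, no fused activation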