tensorflow / tflite-micro

Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
Apache License 2.0

tflite micro image preprocessing #342

Closed gitE0Z9 closed 3 years ago

gitE0Z9 commented 3 years ago

Hi, I am confused about how to do the image preprocessing below. I am using a Himax board.

import cv2
import numpy as np

# Quantization parameters (scale, zero point) of the model's input tensor.
s, z = input_details[0]['quantization']

x = cv2.imread(i, cv2.IMREAD_GRAYSCALE)
x = cv2.resize(x, (224, 224))
x = x / 255          # normalize to [0, 1]
x = x / s + z        # quantize: q = x / scale + zero_point
x = np.array([x], dtype=np.int8)   # add batch dimension, cast to int8
x = np.expand_dims(x, 3)           # add channel dimension

But the image input on this board is uint8, and it is rescaled to int8 by their SDK:

hx_drv_sensor_capture(&g_pimg_config);

hx_drv_image_rescale((uint8_t*)g_pimg_config.raw_address,
                     g_pimg_config.img_width, g_pimg_config.img_height,
                     image_data, image_width, image_height);

I am wondering how I should handle this in C++: quantize and normalize the input in float32, then convert it back to int8?

Could someone share some thoughts?

The model is full integer quantized.

gitE0Z9 commented 3 years ago

Here is my workaround; it doesn't seem very efficient though:

for (uint32_t i = 0; i < input->bytes; i++) {
  // Dequantize the uint8 pixel to a float in [0, 1], then requantize to int8.
  input->data.int8[i] =
      int(input->params.zero_point +
          (float(input->data.uint8[i]) / 255) / input->params.scale);
}
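As an aside, since the mapping depends only on the byte value, the float math can be done once for all 256 possible inputs instead of once per pixel. A minimal sketch of that lookup-table variant (the `scale` and `zero_point` arguments are assumed to come from `input->params`; `BuildLut` and `QuantizeWithLut` are hypothetical helper names):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Precompute the uint8 -> int8 mapping once, clamping to the int8 range.
void BuildLut(int8_t lut[256], float scale, int zero_point) {
  for (int v = 0; v < 256; ++v) {
    int q = static_cast<int>(std::lround(zero_point + (v / 255.0f) / scale));
    lut[v] = static_cast<int8_t>(std::min(127, std::max(-128, q)));
  }
}

// Convert a buffer of pixels with one table lookup per pixel.
void QuantizeWithLut(const uint8_t* src, int8_t* dst, size_t n,
                     const int8_t lut[256]) {
  for (size_t i = 0; i < n; ++i) dst[i] = lut[src[i]];
}
```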
njeffrie commented 3 years ago

Rather than converting uint8 -> float -> int8, you should be able to convert directly between the quantized types. It's worth noting that conceptually we are doing the same thing, just skipping some intermediate steps. Since the scale for uint8 is the same as the scale for asymmetric quantized int8, we only need to shift the zero point from 128 to the new int8 zero point. This means that this code should convert uint8 -> int8:

for (uint32_t i = 0; i < input->bytes; i++) {
  input->data.int8[i] = input->data.uint8[i] + input->params.zero_point - 128;
}
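A small self-contained check of that equivalence, as a sketch: the uint8 zero point of 128 and the shared scale are assumptions carried over from the comment above, and `Clamp` is a hypothetical helper added to keep results in the int8 range.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Clamp an integer to the representable int8 range.
int8_t Clamp(int q) {
  return static_cast<int8_t>(std::min(127, std::max(-128, q)));
}

// Direct conversion: shift the assumed uint8 zero point (128) to the
// model's int8 zero point, with no float math.
int8_t DirectConvert(uint8_t v, int zero_point) {
  return Clamp(static_cast<int>(v) + zero_point - 128);
}

// Reference path: dequantize with the shared scale (uint8 zero point
// again assumed to be 128), then requantize to int8.
int8_t ViaFloat(uint8_t v, float scale, int zero_point) {
  float x = scale * (static_cast<int>(v) - 128);
  return Clamp(static_cast<int>(std::lround(x / scale + zero_point)));
}
```

Under those assumptions, both functions agree for every possible byte value, which is why the intermediate float round trip can be skipped.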