ml-gde / e2e-tflite-tutorials

Project tracking of the "Mobile ML Working Group", for the End-to-End TensorFlow Lite tutorials.
Apache License 2.0

OCR TFLITE #38

Closed tulasiram58827 closed 3 years ago

tulasiram58827 commented 3 years ago

Previously we successfully created TFLite models for the CRAFT text detector. But a text detector generally isn't much use unless it's combined with OCR, and I think there aren't many open-source TFLite models available for OCR. So I did some quick research and initially converted the captcha OCR model to TFLite, which gives almost the same results as the original model.

Please find the source code in this repo.

FYI: I also observed that training with tf-nightly improved accuracy compared with tensorflow-2.3, keeping all other parameters constant.
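For reference, a minimal sketch of the conversion and the parity check described above (the model path and the single-input assumption are placeholders, not the repo's actual code):

```python
import numpy as np
import tensorflow as tf

# Load the trained captcha OCR model (placeholder path) and convert it.
model = tf.keras.models.load_model("captcha_ocr_model")
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Run the TFLite interpreter on a random sample.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sample = np.random.rand(*inp["shape"]).astype(np.float32)
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()

# Check that TFLite outputs closely match the original Keras outputs.
np.testing.assert_allclose(
    model.predict(sample),
    interpreter.get_tensor(out["index"]),
    rtol=1e-3, atol=1e-3,
)
```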

@sayakpaul

sayakpaul commented 3 years ago

On-device OCR is pretty useful in my opinion.

Just for more clarification: @tulasiram58827 took the model shown in this official Keras example and was able to successfully convert it to TFLite. He also plans to try out EasyOCR to see if their pre-trained models can be converted to TFLite. This would be particularly useful since EasyOCR has multilingual support and the performance of their pre-trained models is vetted.

@tulasiram58827 could you send a PR listing this project?

Cc: @khanhlvg @margaretmz

tulasiram58827 commented 3 years ago

Created PR

tulasiram58827 commented 3 years ago

Hi everyone. While converting EasyOCR to ONNX I ran into unsupported-operator issues, so I decided to convert Keras OCR to TFLite instead, and the conversion succeeded. I also successfully ran inference and benchmarked the dynamic-range and float16 quantized models. I am now in the process of creating the small dataset required for integer quantization.

Please follow this notebook for the conversion process, the inference, and the benchmarks. @sayakpaul
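For reference, a minimal sketch of the two quantization modes used here (the SavedModel path and output filenames are placeholders):

```python
import tensorflow as tf

# Load the converted Keras OCR model (placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model("keras_ocr_saved_model")

# Dynamic-range quantization: weights stored as int8, activations kept float.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
dynamic_range_model = converter.convert()

# float16 quantization: weights stored as float16, roughly halving model size.
converter.target_spec.supported_types = [tf.float16]
float16_model = converter.convert()

with open("keras_ocr_dr.tflite", "wb") as f:
    f.write(dynamic_range_model)
with open("keras_ocr_fp16.tflite", "wb") as f:
    f.write(float16_model)
```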

CC: @khanhlvg @margaretmz

sayakpaul commented 3 years ago

This is really wonderful.

Keras OCR is indeed a fantastic project that allows us to run off-the-shelf OCR inference and even fine-tune OCR models. @tulasiram58827 I would suggest publishing these models on TF Hub as well.

Also, here are a couple of comments on the notebook -

tulasiram58827 commented 3 years ago

Created a PR for publishing the TFLite models and updated the notebook with all the points mentioned.

sayakpaul commented 3 years ago

Looks good to me.

So, in build_model the CTC decoder part is being discarded, right? You could mention that in the notebook. You can also clear out the unnecessary cell outputs. The rest looks pretty good.
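For illustration, a sketch of what discarding the CTC decoder looks like; the layer names ("image", "dense2") follow the Keras captcha OCR example and are assumptions, not the notebook's actual code:

```python
import tensorflow as tf

# The training model ends in a CTC loss layer, which TFLite can't express,
# so we cut the graph at the softmax output and run CTC decoding on the host.
training_model = tf.keras.models.load_model("ocr_training_model")  # placeholder path

prediction_model = tf.keras.models.Model(
    training_model.get_layer(name="image").input,    # assumed input layer name
    training_model.get_layer(name="dense2").output,  # assumed last layer before CTC
)

converter = tf.lite.TFLiteConverter.from_keras_model(prediction_model)
tflite_model = converter.convert()
```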

tulasiram58827 commented 3 years ago

Done.

tulasiram58827 commented 3 years ago

Problems with integer quantization:

1. Integer Quantization:

I am able to convert successfully using integer quantization, but I am facing issues while running inference:

RuntimeError: tensorflow/lite/kernels/kernel_util.cc:309 scale_diff / output_scale <= 0.02 was not true.Node number 22 (FULLY_CONNECTED) failed to prepare.

2. Fully Integer Quantization:

This is the error log when converting with the full integer quantization technique:

RuntimeError: Quantization not yet supported for op: 'FLOOR'. Quantization not yet supported for op: 'CAST'. Quantization not yet supported for op: 'CAST'. Quantization not yet supported for op: 'CAST'. Quantization not yet supported for op: 'FLOOR'. Quantization not yet supported for op: 'CAST'. Quantization not yet supported for op: 'CAST'. Quantization not yet supported for op: 'CAST'. Quantization not yet supported for op: 'ADD_N'. Quantization not yet supported for op: 'REVERSE_V2'. Quantization not yet supported for op: 'REVERSE_V2'. Quantization not yet supported for op: 'EXP'. Quantization not yet supported for op: 'DIV'.

You can reproduce both errors with this Notebook. A minimal sketch of the converter settings involved is below.
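A sketch of the two modes, assuming a placeholder representative dataset and input shape (not the notebook's actual values):

```python
import numpy as np
import tensorflow as tf

# Placeholder representative dataset; the input shape is an assumption
# about the OCR model, not taken from the notebook.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 31, 200, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("keras_ocr_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# 1. Integer quantization with float fallback: conversion succeeds, but
#    inference fails at a FULLY_CONNECTED node (the scale_diff error above).
int_model = converter.convert()

# 2. Full integer quantization: conversion itself fails because FLOOR,
#    CAST, ADD_N, REVERSE_V2, EXP, and DIV have no quantized kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
full_int_model = converter.convert()
```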

@khanhlvg

tulasiram58827 commented 3 years ago

Hi @khanhlvg

I have been working on converting EasyOCR to TFLite and am getting this error during the TFLite conversion:

ConverterError: input resource[0] expected type resource != float, the type of assignvariableop_resource_0[0] In {{node AssignVariableOp}}

I successfully converted the PyTorch model to ONNX and ran inference with sample data; the results match correctly, so I believe there are no issues in the PyTorch -> ONNX conversion. There are also no issues converting to a TensorFlow SavedModel. I only get the error when converting the SavedModel to TFLite.

FYI: the model consists of two bidirectional LSTM layers.

I have attached all the mentioned details in this Notebook. You can use the same notebook to reproduce the error above.

tulasiram58827 commented 3 years ago

Hi @khanhlvg

To study the above issue further and find the root cause, I created a simple LSTM-layer model in PyTorch and tried converting it to TFLite. I ran into the same error mentioned in the previous comment. Sharing the Notebook with you.
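A sketch of that minimal reproduction, assuming the usual torch.onnx / onnx-tf pipeline (layer sizes and file paths are placeholders):

```python
import torch
import torch.nn as nn
import onnx
from onnx_tf.backend import prepare
import tensorflow as tf

# A single LSTM layer, exported through PyTorch -> ONNX -> SavedModel -> TFLite.
class TinyLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out

model = TinyLSTM().eval()
dummy = torch.randn(1, 10, 32)
torch.onnx.export(model, dummy, "tiny_lstm.onnx", opset_version=12)

# ONNX -> TensorFlow SavedModel via onnx-tf.
prepare(onnx.load("tiny_lstm.onnx")).export_graph("tiny_lstm_savedmodel")

# SavedModel -> TFLite: this is the step that raises the ConverterError
# about a resource-typed input (mutable variable).
converter = tf.lite.TFLiteConverter.from_saved_model("tiny_lstm_savedmodel")
tflite_model = converter.convert()
```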

khanhlvg commented 3 years ago

The issue comes from the fact that the SavedModel contains a mutable variable, which isn't supported by the TFLite converter. I think it may come from the PyTorch -> ONNX -> TF conversion pipeline rather than from the LSTM itself being unsupported. I'm waiting for a TFLite engineer to take a further look. I'll keep you posted.
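One way to confirm the mutable state is to list the variables shipped with the SavedModel; this sketch assumes the standard SavedModel layout on disk (the path is a placeholder):

```python
import tensorflow as tf

# SavedModel variables live under <model_dir>/variables/variables.*;
# any entries listed here are the mutable state the converter rejects.
reader = tf.train.load_checkpoint("tiny_lstm_savedmodel/variables/variables")
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)
```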

tulasiram58827 commented 3 years ago

Okay. Thanks, @khanhlvg for the update.

sayakpaul commented 3 years ago

What about this one @khanhlvg?

https://github.com/ml-gde/e2e-tflite-tutorials/issues/38#issuecomment-734921176

Could you shed some light on it?

khanhlvg commented 3 years ago

> #38 (comment)
>
> Could you shed some light on it?

Unfortunately, these are limitations of the current TFLite integer quantization engine. There isn't much we can do about them.

sayakpaul commented 3 years ago