PINTO0309 / tflite2tensorflow

Generates saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob, and .pb from .tflite. Supports building the environment with Docker, with direct access to the host PC's GUI and camera to verify operation. Supports NVIDIA GPUs (dGPU) and Intel iHD GPUs (iGPU), as well as dequantization of INT8-quantized models.
https://qiita.com/PINTO
MIT License

Set batch_size during conversion #10

Closed: satyajitghana closed this issue 3 years ago

satyajitghana commented 3 years ago

How do I set the batch size when converting the model to TensorRT using:

```bash
tflite2tensorflow \
--model_path tflite_from_saved_model/model_float32.tflite \
--flatc_path ../../flatc \
--schema_path ../../schema.fbs \
--string_formulas_for_normalization 'data / 255.0' \
--output_tftrt
```
PINTO0309 commented 3 years ago

https://github.com/PINTO0309/openvino2tensorflow#5-2-saved_model-to-tflite-convert
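
The linked section ("5-2. saved_model to tflite convert") suggests the batch size is fixed upstream, when the tflite is generated from the saved_model, rather than at `--output_tftrt` time. Below is a minimal sketch of that idea, assuming a Keras-compatible saved_model; the paths, the `BATCH_SIZE` value, and the lambda wrapper are illustrative assumptions, not taken from the linked guide.

```python
# Minimal sketch (assumed workflow, not the guide verbatim): bake a static
# batch size into the saved_model, then regenerate the tflite file that
# tflite2tensorflow consumes. Paths below are hypothetical placeholders.
import tensorflow as tf

BATCH_SIZE = 1  # the static batch size to bake into the model

model = tf.keras.models.load_model("saved_model")

# Build a concrete function whose input spec fixes the batch dimension;
# the remaining dimensions and dtype are taken from the loaded model.
run = tf.function(lambda x: model(x))
spec = tf.TensorSpec(
    [BATCH_SIZE] + model.inputs[0].shape[1:].as_list(),
    model.inputs[0].dtype,
)
concrete = run.get_concrete_function(spec)

# Re-export with the fixed-batch signature, then convert back to tflite.
tf.saved_model.save(model, "saved_model_fixed_batch", signatures=concrete)
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_fixed_batch")
with open("model_float32.tflite", "wb") as f:
    f.write(converter.convert())
```

The regenerated `model_float32.tflite` can then be passed to the `tflite2tensorflow` command from the question above with `--output_tftrt`.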

satyajitghana commented 3 years ago

Thanks @PINTO0309, I had completely missed it, my bad :/