Open naarkhoo opened 2 years ago
@naarkhoo In order to expedite the troubleshooting process, could you please provide the full URL of the repository you are using, along with more details on the issue reported here? Thank you!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Sorry for my late reply. Here is the Colab: https://drive.google.com/file/d/1iqUgeabbTgfixehGomDoj5eHGfHd8Lvt/view?usp=sharing and I made sure you have access to the files.
With the current code, the model latency on Android devices (an average device) is 150 ms; my goal is to get the model to run at 50 ms. It seems I have to make sure the model works with the uint8 data type.
@jaeyounkim for TF-MOT problems
"ssd_mobilenet_v2_320x320_coco17_tpu" is what "TensorFlow Object Detection API" provides. It is not the model officially supported by the Model Garden team. Let me check if the TensorFlow Model Optimization Toolkit (https://github.com/tensorflow/model-optimization) team can provide some help.
The model is not quantized, that's all. Read the name and compare it to the quantized model and you'll find the difference. You must do post-training quantization to get the result you need.
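As a concrete starting point, here is a minimal sketch of full-integer post-training quantization with the TFLite converter. It assumes the detector was first exported to a TFLite-compatible SavedModel (e.g. with the Object Detection API's export_tflite_graph_tf2.py); the saved_model_dir path and the random calibration data are placeholders you would replace with your own export and real preprocessed images. Depending on the ops in the graph (the NMS post-processing in particular), the converter may reject full uint8 inputs/outputs, in which case drop the two inference_*_type lines.

```python
import numpy as np
import tensorflow as tf

# Placeholder path: point this at your own exported SavedModel.
saved_model_dir = "exported-model/saved_model"

def representative_dataset():
    # Yield ~100 samples so the converter can calibrate activation ranges.
    # Replace the random placeholder with real preprocessed training images.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so the weights and activations are int8
# and the model's inputs/outputs are uint8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_uint8.tflite", "wb") as f:
    f.write(tflite_model)
```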
Additionally, to run your model faster you need a TFLite model, and possibly a hardware accelerator like the Google Coral USB Accelerator.
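For reference, a sketch of what inference with the quantized model looks like. The commented delegate line is an assumption that only applies on Coral hardware, where the model must also be compiled with the edgetpu_compiler first.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_uint8.tflite")
# On Coral Edge TPU hardware you would instead construct the interpreter with:
#   experimental_delegates=[tf.lite.experimental.load_delegate("libedgetpu.so.1")]
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
# After full-integer quantization this should report uint8.
print(input_details["dtype"])

# Feed one dummy uint8 frame of the expected shape and run inference once.
dummy = np.zeros(input_details["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
```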
Hi,
I am in the process of building my SSD model based on
ssd_mobilenet_v2_320x320_coco17_tpu
and I noticed the model works on float32 rather than uint8. I am curious how I can make that change? Also, I would appreciate it if you could point me to other tricks to make my model run faster at inference time, for example a larger kernel size, a shallower model, or some threshold? I feel these recommendations/explanations would be helpful when it comes to optimization.
Here is the link to the Colab notebook: https://drive.google.com/file/d/1iqUgeabbTgfixehGomDoj5eHGfHd8Lvt/view?usp=sharing