Closed Petros626 closed 2 years ago
yes, both FPN 320x320 and 640x640 are supported by Edge TPU.
@Petros626 did you manage to export and compile the models for edge tpu?
Hey,
The new TensorFlow 2 models can't be exported and compiled out of the box; theoretically, we need a fully int8 model. You would have to train the models and then apply post-training quantization to make this work. I plan to try this in the future, but it will take some time because I'm busy with other things.
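For reference, the post-training full-integer quantization step mentioned above can be sketched roughly as follows. This is only a sketch under assumptions: the saved-model path is hypothetical, the 320x320 input size matches the FPNLite 320 model, and the random calibration tensors are placeholders for a real representative dataset of preprocessed training images.

```python
import tensorflow as tf  # assumes TensorFlow >= 2.5

# Hypothetical path to a model exported with the Object Detection API.
SAVED_MODEL_DIR = "exported-model/saved_model"

def representative_dataset():
    # Feed ~100 sample inputs so the converter can calibrate activation
    # ranges. Random data is only a placeholder; use real images in practice.
    for _ in range(100):
        yield [tf.random.uniform([1, 320, 320, 3], minval=0.0, maxval=1.0)]

def convert_to_int8(saved_model_dir=SAVED_MODEL_DIR):
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Restrict the graph to int8 builtins so the Edge TPU compiler can map it.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()

if __name__ == "__main__":
    with open("model_int8.tflite", "wb") as f:
        f.write(convert_to_int8())
```

The resulting `model_int8.tflite` would then be passed to `edgetpu_compiler`; any op the converter could not express as an int8 builtin is exactly where the `TFLITE_BUILTINS_INT8` error mentioned below comes from.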
@Petros626 I thought models like ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8 were int8. The problem I experienced was that the models used operations that weren't part of TFLITE_BUILTINS_INT8, and I'm not sure if there was a way around this. I wouldn't mind retraining and quantizing the model otherwise.
Also, it's worth mentioning that I have whole 2 days of experience with TensorFlow, so might be speaking nonsense :D
@hjonnala I do not yet fully agree with your statement above, but I assume you are talking about post training quantization.
@grinco the names of the models can be confusing, but the models in the TensorFlow 1 model zoo (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md) explicitly carried the suffix "quantized", so you knew the model was built with quantization-aware training and needed no further conversion afterwards.
I have a hunch that the FPN Lite models with post-training quantization are compatible with the Edge TPU, since all floating-point numbers are mapped to integers. The issue you hit may be that you have to allow custom operations when converting the model to int8/uint8. Until now I have mostly worked with TensorFlow 1.15 and the Object Detection API, so if you train your models on a PC with an IDE there are different ways to reach your goal. If you stick with the API, quantization-aware training is, as far as I know, not yet possible, but the other variant (post-training quantization) might work if custom operations are allowed.
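The "allow custom operations" idea above maps to a couple of converter settings. A minimal sketch, assuming a `tf.lite.TFLiteConverter` instance already configured for quantization; note that ops pulled in via `SELECT_TF_OPS` or custom ops will not run on the Edge TPU itself and fall back to the CPU:

```python
import tensorflow as tf  # assumes TensorFlow >= 2.5

def allow_fallback_ops(converter):
    # Keep int8 builtins as the primary target, but let the converter
    # fall back to full TensorFlow ops for anything not covered.
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    # Accept ops the runtime does not know as custom ops instead of
    # failing the conversion outright.
    converter.allow_custom_ops = True
    return converter
```

Whether the compiled model is still useful depends on how many ops fall back: every unsupported op partitions the graph, and only the first fully supported segment runs on the Edge TPU.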
Hello,
I would like to know whether both current SSD FPN models (320x320 and 640x640) from the TensorFlow 2 Model Zoo are supported by the Edge TPU. According to Google Coral, only the FPN 640x640 has been trained and compiled.
source: https://coral.ai/models/object-detection/