google-coral / edgetpu

Coral issue tracker (and legacy Edge TPU API source)
https://coral.ai
Apache License 2.0

TensorFlow 2 SSD-Models support #592

Closed Petros626 closed 2 years ago

Petros626 commented 2 years ago

Hello,

I would like to know if both current SSD models (FPN 320x320 and 640x640) of the Model Zoo 2 are supported by Edge TPU. According to Google Coral, only the FPN 640x640 has been trained and compiled.

source: https://coral.ai/models/object-detection/

hjonnala commented 2 years ago

Yes, both FPN 320x320 and 640x640 are supported by the Edge TPU.

grinco commented 1 year ago

@Petros626 did you manage to export and compile the models for edge tpu?

Petros626 commented 1 year ago

> @Petros626 did you manage to export and compile the models for edge tpu?

Hey,

The new TensorFlow 2 models can't be exported and compiled out of the box; in theory, we need a fully int8 model. You would have to train the models and then apply post-training quantization. I plan to try this in the future, but it will take some time because I'm busy with other things.

grinco commented 1 year ago

@Petros626 I thought models like ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8 were int8. The problem that I experienced was that models were using operations that weren't part of TFLITE_BUILTINS_INT8, not sure if there was a way around this. I wouldn't mind retraining and quantizing the model otherwise.

Also, it's worth mentioning that I have whole 2 days of experience with TensorFlow, so might be speaking nonsense :D

Petros626 commented 1 year ago

@hjonnala I don't yet fully agree with your statement above, but I assume you are talking about post-training quantization.

@grinco The model names can be confusing, but the models in the TensorFlow 1 model zoo (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md) had an explicit "quantized" suffix, so you knew the model was trained with quantization-aware training and needed no further conversion.

I have a hunch that the FPN Lite models with post-training quantization are compatible with the Edge TPU, since all floating-point numbers are mapped to integers. The issue you ran into may be that you have to allow custom operations when converting the model to int8/uint8. Until now I have mostly worked with TensorFlow 1.15 and the Object Detection API, so if you train your models on a PC there are different ways to reach your goal. If you stick with the API, quantization-aware training is, as far as I know, not yet possible, but post-training quantization should work, possibly with custom operations allowed.
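For reference, a minimal sketch of the post-training full-integer quantization step discussed above, assuming a TF2 SSD SavedModel export with a 320x320 input. The paths, sizes, and random calibration data are placeholders; in practice you would feed ~100 real training images:

```python
import numpy as np


def representative_dataset(num_samples=100, input_size=320):
    """Yield calibration samples matching the assumed SSD input signature.

    Random data is a placeholder here; real calibration should use
    images drawn from the training set.
    """
    for _ in range(num_samples):
        yield [np.random.rand(1, input_size, input_size, 3).astype(np.float32)]


def convert_to_int8(saved_model_dir, output_path="model_int8.tflite"):
    # Deferred import so the calibration generator above stays usable
    # without TensorFlow installed.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Restrict conversion to int8 builtins: any op outside this set makes
    # the conversion fail loudly instead of silently staying in float32,
    # which would later be rejected (or mapped to CPU) by the Edge TPU.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    with open(output_path, "wb") as f:
        f.write(converter.convert())
```

The resulting `.tflite` file would then be passed to the `edgetpu_compiler` CLI, which reports which ops it could map to the Edge TPU and which fall back to the CPU.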
