samhodge opened this issue 5 years ago
Found an answer on a plate: https://github.com/intel/webml-polyfill/tree/master/examples/semantic_segmentation/model
@samhodge mind sharing what your inference speed is when using Xception at that kind of resolution?
About 0.2 fps with an NVIDIA GTX 1060; some of that time is post-processing of the semantic mask from INT8 to an antialiased float.
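That INT8-to-float post-processing step could be sketched like this; the 3x3 box blur and the PASCAL VOC class index are my own illustrative assumptions, not samhodge's actual code:

```python
def mask_to_float(mask, target_class):
    # Turn an INT8 label mask into a 0.0/1.0 float mask for one class.
    return [[1.0 if v == target_class else 0.0 for v in row] for row in mask]

def box_blur(mask):
    # 3x3 box blur: a cheap way to antialias the hard label edges.
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += mask[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# Tiny illustrative label map; 15 is "person" in the PASCAL VOC label set.
labels = [
    [0, 0, 15, 15],
    [0, 15, 15, 15],
    [0, 0, 15, 0],
]
person = box_blur(mask_to_float(labels, 15))
```

A real pipeline would do this on the GPU per class, but the idea is the same: hard label edges become soft 0..1 coverage values.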
Do you know how to produce a TFLite file with arbitrary input dimensions from the DeepLab models here:
https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md
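Not an authoritative answer, but with the TF 1.x `tflite_convert` CLI the conversion would look roughly like this. `ImageTensor` and `SemanticPredictions` are the tensor names export_model.py writes for DeepLab; the paths and the 513x513 shape are placeholders you would adjust to the resolution you exported:

```shell
# Convert a frozen DeepLab graph to TFLite with an explicit input shape.
# Paths are placeholders; --input_shapes must match the exported crop size.
tflite_convert \
  --graph_def_file=/path/to/frozen_inference_graph.pb \
  --output_file=/path/to/deeplab_513.tflite \
  --input_arrays=ImageTensor \
  --input_shapes=1,513,513,3 \
  --output_arrays=SemanticPredictions
```

For a different resolution you would re-export the graph with a matching crop size and change `--input_shapes` accordingly.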
I got pretty close. I have some test code which executes flawlessly, but for my own model that I have converted it errors on the .invoke() line.
The .pb file was created using the export_model.py script here:
https://github.com/tensorflow/models/blob/master/research/deeplab/export_model.py
following the docs at https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/export_model.md
It is an xception_65 model.
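For reference, the export step for an xception_65 checkpoint would look roughly like this; the checkpoint and output paths are placeholders, and the flag values follow the defaults in the linked docs:

```shell
# Export a DeepLab xception_65 checkpoint to a frozen inference graph.
# Paths are placeholders; 21 classes and 513x513 match the PASCAL VOC setup.
python export_model.py \
  --checkpoint_path=/path/to/model.ckpt \
  --export_path=/path/to/frozen_inference_graph.pb \
  --model_variant="xception_65" \
  --num_classes=21 \
  --crop_size=513 \
  --crop_size=513
```

Note that `--crop_size` is passed twice (height, then width), which is the convention this script uses.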
I quantised as follows, and that step completes cleanly.
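The quantisation command itself is not shown above; with the TF 1.x CLI, post-training weight quantisation during conversion would look roughly like this (paths are placeholders, tensor names are the DeepLab export defaults):

```shell
# Convert with post-training weight quantisation enabled.
tflite_convert \
  --graph_def_file=/path/to/frozen_inference_graph.pb \
  --output_file=/path/to/deeplab_quant.tflite \
  --input_arrays=ImageTensor \
  --input_shapes=1,513,513,3 \
  --output_arrays=SemanticPredictions \
  --post_training_quantize
```

Weight-only quantisation like this shrinks the file but keeps float inference, which matters if the target is a desktop GPU backend rather than a mobile DSP.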
Now I know this will take a while to run on a mobile phone, but the end goal is to run it on a GPU via OpenGL ES on Linux and Metal on Apple desktops.
Do you have any hints? To repeat, here is the error message: