Closed — dubrovin-sudo closed this issue 1 year ago
Hi, @dubrovin-sudo
Thank you for bringing this issue to our attention. At the moment I'm not sure whether TensorFlow.js supports converting activations to 16-bit integer values and weights to 8-bit integer values during model conversion from TensorFlow to TensorFlow.js, the way TensorFlow Lite does. Please refer to the official "Post-training integer quantization with int16 activations" documentation, and in the meantime you can refer to the official documentation for tfjs-converter.
I'll check with the concerned team and update you soon. Thank you!
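For reference, the TensorFlow Lite path mentioned above (int16 activations with int8 weights, the "16x8" mode) looks roughly like the sketch below. The tiny Keras model and the random calibration data are placeholders for illustration; substitute your own model and representative inputs.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in practice you would load your own.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="relu"),
])

def representative_dataset():
    # Hypothetical calibration data; replace with real input samples.
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Request 16-bit activations with 8-bit weights (experimental op set).
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]
tflite_model = converter.convert()  # bytes of the quantized .tflite model
```

Note that this produces a TFLite flatbuffer, not a TensorFlow.js model; it is shown only to illustrate the TFLite feature the question is comparing against.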
Our converter only supports quantization annotation based on the node name; node-type support is not available. Basically, this means you have to specify the node names (possibly as a regex) that need to be in 16- or 8-bit integer format; you cannot use the node op type (relu, conv2d, etc.).
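As a rough sketch of what name-based annotation looks like on the command line: the tfjs-converter quantization flags accept an optional wildcard pattern matched against weight/node names, and different dtypes can be mixed in one invocation. The patterns below (`conv/*/kernel`, `*`) and the `./saved_model` / `./web_model` paths are hypothetical; adjust them to your model's actual node names.

```shell
# Quantize weights whose names match "conv/*/kernel" to uint16,
# and everything else to uint8 — selection is by node *name*,
# never by op type (relu, conv2d, ...).
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --quantize_uint16="conv/*/kernel" \
    --quantize_uint8="*" \
    ./saved_model \
    ./web_model
```

Note that these flags quantize stored weights; they do not change activation precision at inference time.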
This issue has been marked stale because it has had no recent activity in the past 7 days. It will be closed if no further activity occurs. Thank you.
This issue was closed due to lack of activity after being marked stale for the past 7 days.
Hello,
I don't understand how to do post-training integer quantization with int16 activations in tfjs-converter.
For example, for post-training integer quantization I did it like this:
How can I convert activations to 16-bit integer values and weights to 8-bit integer values during model conversion from TensorFlow to TFJS format?