Describe the bug
TensorFlow Model Optimization fails to quantize dilated convolution layers.
System information
TensorFlow version (installed from source or binary): source
TensorFlow Model Optimization version (installed from source or binary): source
Python version: 3.10.12
Describe the expected behavior
Quantizing dilated convolutions should be essentially the same as any other layer.
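For reference, a dilated convolution in Keras is an ordinary Conv2D with a non-default dilation_rate, so there is nothing structurally unusual about it. A minimal sketch (layer sizes and input shape are illustrative):

```python
import tensorflow as tf

# A regular conv layer and its dilated counterpart differ only in dilation_rate.
inputs = tf.keras.layers.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(8, 3, padding="same")(inputs)              # regular
x = tf.keras.layers.Conv2D(8, 3, dilation_rate=2, padding="same")(x)  # dilated
model = tf.keras.Model(inputs, x)
```

With stride 1 and "same" padding both layers preserve the spatial dimensions, which is why one would expect quantization to treat them identically.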
Describe the current behavior
Either tf or tfmot is silently failing. There is the following very old issue describing exactly this: https://github.com/tensorflow/tensorflow/issues/26797
There is a slightly newer open issue showing that this was never resolved:
https://github.com/tensorflow/tensorflow/issues/53025
I am not 100% certain, but it seems like these issues are misplaced and should be designated as model-optimization issues.
There seems to be a workaround using tf.nn.conv2d instead of tf.keras.layers.Conv2D, but as far as I can tell this would require layer subclassing which, based on other issues, is still buggy when it comes to quantization.

Code to reproduce the issue
See aforementioned issues.