PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License

Error performing quantization-aware training (QAT) with keras onnx2tf generated model #611

Closed: CarlosNacher closed this 7 months ago

CarlosNacher commented 7 months ago

Issue Type

Others

OS

Windows

onnx2tf version number

1.19.16

onnx version number

1.15.0

onnxruntime version number

1.17.1

onnxsim (onnx_simplifier) version number

0.4.33

tensorflow version number

2.15.0

Download URL for ONNX

https://we.tl/t-HJjXDINMND

Parameter Replacement JSON

None

Description

1. Purpose: Create an INT8 model for Edge TPU.
2. What: I have successfully converted my ONNX model to TF ones (float32, float16, integer, fully_integer, ...), i.e., the onnx2tf conversion itself works as expected. The problem I am trying to solve is not strictly within the scope of onnx2tf, but I think it would be cool if it were possible, and maybe QAT could be added to the conversions onnx2tf supports. Having said that, with the Keras model returned by the following code:

import onnx2tf

# ONNX_MODEL_PATH and overwrite_input_shape are defined earlier in my script.
keras_model = onnx2tf.convert(
    input_onnx_file_path=ONNX_MODEL_PATH,
    overwrite_input_shape=overwrite_input_shape,
    disable_model_save=True,
)

I am trying to do QAT (as detailed in https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide#create_and_deploy_quantized_model):

import tensorflow_model_optimization as tfmot
quant_aware_model = tfmot.quantization.keras.quantize_model(keras_model)

but I am getting the following error:

ValueError: Exception encountered when calling layer "tf.compat.v1.pad" (type TFOpLambda).

Shape must be rank 4 but is rank 5 for '{{node tf.compat.v1.pad/Pad}} = Pad[T=DT_FLOAT, Tpaddings=DT_INT32](tf.compat.v1.pad/Pad/input, tf.compat.v1.pad/Pad/paddings)' with input shapes: [1,1,1024,608,3], [4,2].

Call arguments received by layer "tf.compat.v1.pad" (type TFOpLambda):
  • tensor=['tf.Tensor(shape=(1, 1024, 608, 3), dtype=float32)']
  • paddings=[['0', '0'], ['3', '3'], ['3', '3'], ['0', '0']]
  • mode='CONSTANT'
  • name=None
  • constant_values=0

Do you know why a rank-5 tensor of shape [1, 1, 1024, 608, 3] is being passed into the pad operation instead of the rank-4 one, and/or how we can fix that?

Thank you so much in advance, and thank you for your valuable work developing this repo!!!

By the way, I am trying to perform QAT as an alternative to PTQ because, with the model_full_integer_quant.tflite generated by onnx2tf.convert, all my inferences come out with the same value, as I detailed in https://github.com/PINTO0309/onnx2tf/issues/610
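
In case it helps to rule out a harness problem on my side, my PTQ inference loop looks roughly like this (a simplified sketch; the model path and the random input are placeholders for my real artifacts), where the input is quantized and the output dequantized with the parameters stored in the model:

import numpy as np
import tensorflow as tf

# Simplified PTQ inference harness; the path and the random input below are
# placeholders for the real files and preprocessed frames.
interpreter = tf.lite.Interpreter(model_path="model_full_integer_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Fully integer models expect the input quantized with the stored scale /
# zero-point, and the raw output dequantized the same way.
image = np.random.rand(1, 1024, 608, 3).astype(np.float32)  # stand-in for a real frame
in_scale, in_zp = inp["quantization"]
interpreter.set_tensor(inp["index"], np.round(image / in_scale + in_zp).astype(inp["dtype"]))
interpreter.invoke()

out_scale, out_zp = out["quantization"]
result = (interpreter.get_tensor(out["index"]).astype(np.float32) - out_zp) * out_scale
print(result.min(), result.max())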

PINTO0309 commented 7 months ago

I don't know what tfmot.quantization.keras.quantize_model is doing internally, but it appears that there is no 5D Pad in the keras_model.summary() result. The error "Shape must be rank 4 but is rank 7" is unintelligible.

import onnx2tf

"""
onnxsim model.onnx model.onnx \
--overwrite-input-shape "input:1,3,1024,608"

pip install -U \
    tensorflow_model_optimization==0.8.0 \
    onnx2tf==1.20.0 \
    tensorflow==2.16.1 \
    tf-keras~=2.16
"""

keras_model = \
    onnx2tf.convert(
        input_onnx_file_path='model.onnx',
        disable_model_save=True
    )

print('******************************************************')
keras_model.summary()

import tensorflow_model_optimization as tfmot
quant_aware_model = \
    tfmot.quantization.keras.quantize_model(
        to_quantize=keras_model,
        quantized_layer_name_prefix='quant_'
    )

print('******************************************************')
quant_aware_model.summary()
    :
    :
INFO: 77 / 77
INFO: onnx_op_type: Add onnx_op_name: /Add
INFO:  input_name.1: /Mul_2_output_0 shape: [1, 1, 1024, 608] dtype: float32
INFO:  input_name.2: /Mul_3_output_0 shape: [1, 1, 1024, 608] dtype: float32
INFO:  output_name.1: output shape: [1, 1, 1024, 608] dtype: float32
INFO: tf_op_type: add
INFO:  input.1.x: name: tf.math.multiply_8/Mul:0 shape: (1, 1024, 608, 1) dtype: <dtype: 'float32'> 
INFO:  input.2.y: name: tf.math.multiply_13/Mul:0 shape: (1, 1024, 608, 1) dtype: <dtype: 'float32'> 
INFO:  output.1.output: name: tf.math.add_22/Add:0 shape: (1, 1024, 608, 1) dtype: <dtype: 'float32'> 

******************************************************
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                Output Shape                 Param #   Connected to                  
==================================================================================================
 input (InputLayer)          [(1, 1024, 608, 3)]          0         []                            

 tf.math.subtract (TFOpLamb  (1, 1024, 608, 3)            0         ['input[0][0]']               
 da)                                                                                              

 tf.math.divide (TFOpLambda  (1, 1024, 608, 3)            0         ['tf.math.subtract[0][0]']    
 )                                                                                                

 tf.compat.v1.pad (TFOpLamb  (1, 1026, 610, 3)            0         ['tf.math.divide[0][0]']      
 da)                                                                                              

 tf.nn.convolution_2 (TFOpL  (1, 512, 304, 32)            0         ['tf.compat.v1.pad[0][0]']    
 ambda)                                                                                           

 tf.math.add_2 (TFOpLambda)  (1, 512, 304, 32)            0         ['tf.nn.convolution_2[0][0]'] 

 tf.nn.relu_2 (TFOpLambda)   (1, 512, 304, 32)            0         ['tf.math.add_2[0][0]']       

 tf.compat.v1.pad_1 (TFOpLa  (1, 514, 306, 32)            0         ['tf.nn.relu_2[0][0]']        
 mbda)                                                                                            

 tf.nn.convolution_3 (TFOpL  (1, 256, 152, 32)            0         ['tf.compat.v1.pad_1[0][0]']  
 ambda)                                                                                           

 tf.math.add_3 (TFOpLambda)  (1, 256, 152, 32)            0         ['tf.nn.convolution_3[0][0]'] 

 tf.nn.relu_3 (TFOpLambda)   (1, 256, 152, 32)            0         ['tf.math.add_3[0][0]']       

 tf.compat.v1.pad_2 (TFOpLa  (1, 258, 154, 32)            0         ['tf.nn.relu_3[0][0]']        
 mbda)                                                                                            

 tf.nn.convolution_6 (TFOpL  (1, 128, 76, 64)             0         ['tf.compat.v1.pad_2[0][0]']  
 ambda)                                                                                           

 tf.math.add_6 (TFOpLambda)  (1, 128, 76, 64)             0         ['tf.nn.convolution_6[0][0]'] 

 tf.nn.relu_6 (TFOpLambda)   (1, 128, 76, 64)             0         ['tf.math.add_6[0][0]']       

 tf.compat.v1.pad_3 (TFOpLa  (1, 130, 78, 64)             0         ['tf.nn.relu_6[0][0]']        
 mbda)                                                                                            

 tf.nn.convolution_9 (TFOpL  (1, 64, 38, 64)              0         ['tf.compat.v1.pad_3[0][0]']  
 ambda)                                                                                           

 tf.math.add_9 (TFOpLambda)  (1, 64, 38, 64)              0         ['tf.nn.convolution_9[0][0]'] 

 tf.nn.relu_9 (TFOpLambda)   (1, 64, 38, 64)              0         ['tf.math.add_9[0][0]']       

 tf.compat.v1.pad_4 (TFOpLa  (1, 66, 40, 64)              0         ['tf.nn.relu_9[0][0]']        
 mbda)                                                                                            

 tf.nn.convolution_12 (TFOp  (1, 32, 19, 64)              0         ['tf.compat.v1.pad_4[0][0]']  
 Lambda)                                                                                          

 tf.math.add_12 (TFOpLambda  (1, 32, 19, 64)              0         ['tf.nn.convolution_12[0][0]']
 )                                                                                                

 tf.nn.relu_10 (TFOpLambda)  (1, 32, 19, 64)              0         ['tf.math.add_12[0][0]']      

 tf.nn.convolution_13 (TFOp  (1, 25, 12, 64)              0         ['tf.nn.relu_10[0][0]']       
 Lambda)                                                                                          

 tf.math.add_13 (TFOpLambda  (1, 25, 12, 64)              0         ['tf.nn.convolution_13[0][0]']
 )                                                                                                

 tf.compat.v1.image.resize_  (1, 15, 8, 64)               0         ['tf.math.add_13[0][0]']      
 bilinear (TFOpLambda)                                                                            

 tf.compat.v1.pad_5 (TFOpLa  (1, 19, 12, 64)              0         ['tf.compat.v1.image.resize_bi
 mbda)                                                              linear[0][0]']                

 tf.nn.convolution_14 (TFOp  (1, 16, 9, 64)               0         ['tf.compat.v1.pad_5[0][0]']  
 Lambda)                                                                                          

 tf.math.add_14 (TFOpLambda  (1, 16, 9, 64)               0         ['tf.nn.convolution_14[0][0]']
 )                                                                                                

 tf.nn.relu_11 (TFOpLambda)  (1, 16, 9, 64)               0         ['tf.math.add_14[0][0]']      

 tf.compat.v1.image.resize_  (1, 32, 19, 64)              0         ['tf.nn.relu_11[0][0]']       
 bilinear_1 (TFOpLambda)                                                                          

 tf.compat.v1.pad_7 (TFOpLa  (1, 36, 23, 64)              0         ['tf.compat.v1.image.resize_bi
 mbda)                                                              linear_1[0][0]']              

 tf.nn.convolution_15 (TFOp  (1, 33, 20, 64)              0         ['tf.compat.v1.pad_7[0][0]']  
 Lambda)                                                                                          

 tf.math.add_15 (TFOpLambda  (1, 33, 20, 64)              0         ['tf.nn.convolution_15[0][0]']
 )                                                                                                

 tf.nn.relu_12 (TFOpLambda)  (1, 33, 20, 64)              0         ['tf.math.add_15[0][0]']      

 tf.compat.v1.image.resize_  (1, 63, 37, 64)              0         ['tf.nn.relu_12[0][0]']       
 bilinear_3 (TFOpLambda)                                                                          

 tf.compat.v1.pad_8 (TFOpLa  (1, 67, 41, 64)              0         ['tf.compat.v1.image.resize_bi
 mbda)                                                              linear_3[0][0]']              

 tf.nn.convolution_16 (TFOp  (1, 64, 38, 64)              0         ['tf.compat.v1.pad_8[0][0]']  
 Lambda)                                                                                          

 tf.math.add_16 (TFOpLambda  (1, 64, 38, 64)              0         ['tf.nn.convolution_16[0][0]']
 )                                                                                                

 tf.nn.relu_13 (TFOpLambda)  (1, 64, 38, 64)              0         ['tf.math.add_16[0][0]']      

 tf.compat.v1.image.resize_  (1, 128, 76, 64)             0         ['tf.nn.relu_13[0][0]']       
 bilinear_4 (TFOpLambda)                                                                          

 tf.compat.v1.pad_9 (TFOpLa  (1, 132, 80, 64)             0         ['tf.compat.v1.image.resize_bi
 mbda)                                                              linear_4[0][0]']              

 tf.nn.convolution_17 (TFOp  (1, 129, 77, 64)             0         ['tf.compat.v1.pad_9[0][0]']  
 Lambda)                                                                                          

 tf.math.add_17 (TFOpLambda  (1, 129, 77, 64)             0         ['tf.nn.convolution_17[0][0]']
 )                                                                                                

 tf.nn.relu_14 (TFOpLambda)  (1, 129, 77, 64)             0         ['tf.math.add_17[0][0]']      

 tf.compat.v1.image.resize_  (1, 255, 151, 64)            0         ['tf.nn.relu_14[0][0]']       
 bilinear_5 (TFOpLambda)                                                                          

 tf.nn.convolution (TFOpLam  (1, 1021, 605, 128)          0         ['tf.math.divide[0][0]']      
 bda)                                                                                             

 tf.compat.v1.pad_10 (TFOpL  (1, 259, 155, 64)            0         ['tf.compat.v1.image.resize_bi
 ambda)                                                             linear_5[0][0]']              

 tf.math.add (TFOpLambda)    (1, 1021, 605, 128)          0         ['tf.nn.convolution[0][0]']   

 tf.nn.convolution_1 (TFOpL  (1, 1021, 605, 128)          0         ['tf.math.divide[0][0]']      
 ambda)                                                                                           

 tf.nn.convolution_18 (TFOp  (1, 256, 152, 64)            0         ['tf.compat.v1.pad_10[0][0]'] 
 Lambda)                                                                                          

 tf.nn.relu (TFOpLambda)     (1, 1021, 605, 128)          0         ['tf.math.add[0][0]']         

 tf.math.add_1 (TFOpLambda)  (1, 1021, 605, 128)          0         ['tf.nn.convolution_1[0][0]'] 

 tf.math.add_18 (TFOpLambda  (1, 256, 152, 64)            0         ['tf.nn.convolution_18[0][0]']
 )                                                                                                

 tf.compat.v1.nn.avg_pool (  (1, 510, 302, 128)           0         ['tf.nn.relu[0][0]']          
 TFOpLambda)                                                                                      

 tf.nn.relu_1 (TFOpLambda)   (1, 1021, 605, 128)          0         ['tf.math.add_1[0][0]']       

 tf.nn.relu_15 (TFOpLambda)  (1, 256, 152, 64)            0         ['tf.math.add_18[0][0]']      

 tf.nn.convolution_4 (TFOpL  (1, 507, 299, 256)           0         ['tf.compat.v1.nn.avg_pool[0][
 ambda)                                                             0]']                          

 tf.compat.v1.nn.avg_pool_1  (1, 510, 302, 128)           0         ['tf.nn.relu_1[0][0]']        
  (TFOpLambda)                                                                                    

 tf.compat.v1.image.resize_  (1, 511, 303, 64)            0         ['tf.nn.relu_15[0][0]']       
 bilinear_6 (TFOpLambda)                                                                          

 tf.math.add_4 (TFOpLambda)  (1, 507, 299, 256)           0         ['tf.nn.convolution_4[0][0]'] 

 tf.nn.convolution_5 (TFOpL  (1, 507, 299, 256)           0         ['tf.compat.v1.nn.avg_pool_1[0
 ambda)                                                             ][0]']                        

 tf.compat.v1.pad_11 (TFOpL  (1, 515, 307, 64)            0         ['tf.compat.v1.image.resize_bi
 ambda)                                                             linear_6[0][0]']              

 tf.nn.relu_4 (TFOpLambda)   (1, 507, 299, 256)           0         ['tf.math.add_4[0][0]']       

 tf.math.add_5 (TFOpLambda)  (1, 507, 299, 256)           0         ['tf.nn.convolution_5[0][0]'] 

 tf.nn.convolution_19 (TFOp  (1, 512, 304, 64)            0         ['tf.compat.v1.pad_11[0][0]'] 
 Lambda)                                                                                          

 tf.compat.v1.nn.avg_pool_2  (1, 253, 149, 256)           0         ['tf.nn.relu_4[0][0]']        
  (TFOpLambda)                                                                                    

 tf.nn.relu_5 (TFOpLambda)   (1, 507, 299, 256)           0         ['tf.math.add_5[0][0]']       

 tf.math.add_19 (TFOpLambda  (1, 512, 304, 64)            0         ['tf.nn.convolution_19[0][0]']
 )                                                                                                

 tf.nn.convolution_7 (TFOpL  (1, 251, 147, 256)           0         ['tf.compat.v1.nn.avg_pool_2[0
 ambda)                                                             ][0]']                        

 tf.compat.v1.nn.avg_pool_3  (1, 253, 149, 256)           0         ['tf.nn.relu_5[0][0]']        
  (TFOpLambda)                                                                                    

 tf.nn.relu_16 (TFOpLambda)  (1, 512, 304, 64)            0         ['tf.math.add_19[0][0]']      

 tf.math.add_7 (TFOpLambda)  (1, 251, 147, 256)           0         ['tf.nn.convolution_7[0][0]'] 

 tf.nn.convolution_8 (TFOpL  (1, 251, 147, 256)           0         ['tf.compat.v1.nn.avg_pool_3[0
 ambda)                                                             ][0]']                        

 tf.compat.v1.image.resize_  (1, 248, 144, 64)            0         ['tf.nn.relu_16[0][0]']       
 bilinear_7 (TFOpLambda)                                                                          

 tf.nn.relu_7 (TFOpLambda)   (1, 251, 147, 256)           0         ['tf.math.add_7[0][0]']       

 tf.math.add_8 (TFOpLambda)  (1, 251, 147, 256)           0         ['tf.nn.convolution_8[0][0]'] 

 tf.nn.convolution_20 (TFOp  (1, 248, 144, 64)            0         ['tf.compat.v1.image.resize_bi
 Lambda)                                                            linear_7[0][0]']              

 tf.nn.convolution_10 (TFOp  (1, 248, 144, 384)           0         ['tf.nn.relu_7[0][0]']        
 Lambda)                                                                                          

 tf.nn.relu_8 (TFOpLambda)   (1, 251, 147, 256)           0         ['tf.math.add_8[0][0]']       

 tf.math.add_20 (TFOpLambda  (1, 248, 144, 64)            0         ['tf.nn.convolution_20[0][0]']
 )                                                                                                

 tf.math.add_10 (TFOpLambda  (1, 248, 144, 384)           0         ['tf.nn.convolution_10[0][0]']
 )                                                                                                

 tf.nn.convolution_11 (TFOp  (1, 248, 144, 768)           0         ['tf.nn.relu_8[0][0]']        
 Lambda)                                                                                          

 tf.nn.relu_17 (TFOpLambda)  (1, 248, 144, 64)            0         ['tf.math.add_20[0][0]']      

 tf.math.subtract_1 (TFOpLa  (1, 248, 144, 384)           0         ['tf.math.add_10[0][0]']      
 mbda)                                                                                            

 tf.math.add_11 (TFOpLambda  (1, 248, 144, 768)           0         ['tf.nn.convolution_11[0][0]']
 )                                                                                                

 tf.nn.convolution_21 (TFOp  (1, 248, 144, 384)           0         ['tf.nn.relu_17[0][0]']       
 Lambda)                                                                                          

 tf.math.divide_1 (TFOpLamb  (1, 248, 144, 384)           0         ['tf.math.subtract_1[0][0]']  
 da)                                                                                              

 tf.strided_slice (TFOpLamb  (1, 248, 144, 384)           0         ['tf.math.add_11[0][0]']      
 da)                                                                                              

 tf.math.add_21 (TFOpLambda  (1, 248, 144, 384)           0         ['tf.nn.convolution_21[0][0]']
 )                                                                                                

 tf.strided_slice_1 (TFOpLa  (1, 248, 144, 384)           0         ['tf.math.add_11[0][0]']      
 mbda)                                                                                            

 tf.math.subtract_2 (TFOpLa  (1, 248, 144, 384)           0         ['tf.math.divide_1[0][0]',    
 mbda)                                                               'tf.strided_slice[0][0]']    

 tf.math.subtract_4 (TFOpLa  (1, 248, 144, 384)           0         ['tf.math.add_21[0][0]',      
 mbda)                                                               'tf.strided_slice_1[0][0]']  

 tf.math.multiply_6 (TFOpLa  (1, 248, 144, 384)           0         ['tf.math.subtract_2[0][0]',  
 mbda)                                                               'tf.math.subtract_2[0][0]']  

 tf.math.multiply_11 (TFOpL  (1, 248, 144, 384)           0         ['tf.math.subtract_4[0][0]',  
 ambda)                                                              'tf.math.subtract_4[0][0]']  

 tf.math.reduce_mean (TFOpL  (1, 248, 144, 1)             0         ['tf.math.multiply_6[0][0]']  
 ambda)                                                                                           

 tf.math.reduce_mean_1 (TFO  (1, 248, 144, 1)             0         ['tf.math.multiply_11[0][0]'] 
 pLambda)                                                                                         

 tf.compat.v1.pad_6 (TFOpLa  (1, 256, 152, 1)             0         ['tf.math.reduce_mean[0][0]'] 
 mbda)                                                                                            

 tf.compat.v1.pad_12 (TFOpL  (1, 256, 152, 1)             0         ['tf.math.reduce_mean_1[0][0]'
 ambda)                                                             ]                             

 tf.compat.v1.image.resize_  (1, 1024, 608, 1)            0         ['tf.compat.v1.pad_6[0][0]']  
 bilinear_2 (TFOpLambda)                                                                          

 tf.compat.v1.image.resize_  (1, 1024, 608, 1)            0         ['tf.compat.v1.pad_12[0][0]'] 
 bilinear_8 (TFOpLambda)                                                                          

 tf.math.subtract_3 (TFOpLa  (1, 1024, 608, 1)            0         ['tf.compat.v1.image.resize_bi
 mbda)                                                              linear_2[0][0]']              

 tf.math.subtract_5 (TFOpLa  (1, 1024, 608, 1)            0         ['tf.compat.v1.image.resize_bi
 mbda)                                                              linear_8[0][0]']              

 tf.math.multiply_7 (TFOpLa  (1, 1024, 608, 1)            0         ['tf.math.subtract_3[0][0]']  
 mbda)                                                                                            

 tf.math.multiply_12 (TFOpL  (1, 1024, 608, 1)            0         ['tf.math.subtract_5[0][0]']  
 ambda)                                                                                           

 tf.math.divide_2 (TFOpLamb  (1, 1024, 608, 1)            0         ['tf.math.multiply_7[0][0]']  
 da)                                                                                              

 tf.math.divide_3 (TFOpLamb  (1, 1024, 608, 1)            0         ['tf.math.multiply_12[0][0]'] 
 da)                                                                                              

 tf.math.multiply_8 (TFOpLa  (1, 1024, 608, 1)            0         ['tf.math.divide_2[0][0]']    
 mbda)                                                                                            

 tf.math.multiply_13 (TFOpL  (1, 1024, 608, 1)            0         ['tf.math.divide_3[0][0]']    
 ambda)                                                                                           

 output (TFOpLambda)         (1, 1024, 608, 1)            0         ['tf.math.multiply_8[0][0]',  
                                                                     'tf.math.multiply_13[0][0]'] 

==================================================================================================
Total params: 0 (0.00 Byte)
Trainable params: 0 (0.00 Byte)
Non-trainable params: 0 (0.00 Byte)
__________________________________________________________________________________________________
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/xxxx/.vscode/extensions/ms-python.debugpy-2024.5.11001012-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/home/xxxx/.vscode/extensions/ms-python.debugpy-2024.5.11001012-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/home/xxxx/.vscode/extensions/ms-python.debugpy-2024.5.11001012-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/home/xxxx/.vscode/extensions/ms-python.debugpy-2024.5.11001012-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home/xxxx/.vscode/extensions/ms-python.debugpy-2024.5.11001012-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home/xxxx/.vscode/extensions/ms-python.debugpy-2024.5.11001012-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/home/xxxx/work/qat/test.py", line 24, in <module>
    quant_aware_model = \
  File "/home/xxxx/.local/lib/python3.10/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 141, in quantize_model
    return quantize_apply(
  File "/home/xxxx/.local/lib/python3.10/site-packages/tensorflow_model_optimization/python/core/keras/metrics.py", line 74, in inner
    raise error
  File "/home/xxxx/.local/lib/python3.10/site-packages/tensorflow_model_optimization/python/core/keras/metrics.py", line 69, in inner
    results = func(*args, **kwargs)
  File "/home/xxxx/.local/lib/python3.10/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 475, in quantize_apply
    _extract_original_model(model_copy)
  File "/home/xxxx/.local/lib/python3.10/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 398, in _extract_original_model
    unwrapped_model = keras.models.clone_model(
  File "/home/xxxx/.local/lib/python3.10/site-packages/tf_keras/src/models/cloning.py", line 540, in clone_model
    return _clone_functional_model(
  File "/home/xxxx/.local/lib/python3.10/site-packages/tf_keras/src/models/cloning.py", line 230, in _clone_functional_model
    ) = functional.reconstruct_from_config(
  File "/home/xxxx/.local/lib/python3.10/site-packages/tf_keras/src/engine/functional.py", line 1504, in reconstruct_from_config
    if process_node(layer, node_data):
  File "/home/xxxx/.local/lib/python3.10/site-packages/tf_keras/src/engine/functional.py", line 1444, in process_node
    output_tensors = layer(input_tensors, **kwargs)
  File "/home/xxxx/.local/lib/python3.10/site-packages/tf_keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/xxxx/.local/lib/python3.10/site-packages/tensorflow/python/framework/ops.py", line 1037, in _create_c_op
    raise ValueError(e.message)
ValueError: Exception encountered when calling layer "tf.compat.v1.pad" (type TFOpLambda).

Shape must be rank 4 but is rank 7 for '{{node tf.compat.v1.pad/Pad}} = Pad[T=DT_FLOAT, Tpaddings=DT_INT32](tf.compat.v1.pad/Pad/input, tf.compat.v1.pad/Pad/paddings)' with input shapes: [1,1,1,1,1024,608,3], [4,2].

Call arguments received by layer "tf.compat.v1.pad" (type TFOpLambda):
  • tensor=['tf.Tensor(shape=(1, 1, 1, 1024, 608, 3), dtype=float32)']
  • paddings=[['0', '0'], ['1', '1'], ['1', '1'], ['0', '0']]
  • mode='CONSTANT'
  • name=None
  • constant_values=0
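
The traceback ends inside tf_keras.models.clone_model -> reconstruct_from_config, which quantize_apply calls before wrapping any layers, so one way to narrow it down might be to clone the converted model directly without tfmot (a rough, untested sketch continuing the script above):

import tf_keras

# If cloning alone reproduces the rank error, the problem is in how the
# TFOpLambda pad layers are rebuilt from config, not in the quantization
# wrapping itself.
cloned_model = tf_keras.models.clone_model(keras_model)
cloned_model.summary()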

CarlosNacher commented 7 months ago

Okay, I'll investigate why tfmot.quantization.keras.quantize_model could be raising this error, and if I find a solution or the reason, I'll post it here.

By the way, is it in your plans to include QAT functionality in onnx2tf? It would be cool, and if you are not planning to work on it, I could (if I manage to solve problems like this one)!

In the meantime, it seems that QAT doesn't work well with TFOpLambda layers, as reported in TensorFlow issues, nor with tf.compat.v1 ops :(
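
To double-check that the problem is the TFOpLambda/tf.compat.v1 wrappers rather than QAT itself, my understanding is that a functional model built only from standard Keras layers should go through quantize_model without this error (a toy sketch, unrelated to the converted model):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy functional model made only of built-in Keras layers (no TFOpLambda).
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(16, 7, strides=2, padding="same", activation="relu")(inputs)
outputs = tf.keras.layers.Conv2D(1, 1)(x)
toy_model = tf.keras.Model(inputs, outputs)

# quantize_model wraps these supported layers with quantize wrappers, whereas
# the onnx2tf output is all TFOpLambda ops, which seems to be what trips it up.
quant_toy = tfmot.quantization.keras.quantize_model(toy_model)
quant_toy.summary()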

PINTO0309 commented 7 months ago

https://github.com/AlexanderLutsenko/nobuco

https://github.com/alibaba/TinyNeuralNetwork

By the way, is it in your plans to include QAT functionality in onnx2tf?

No.

github-actions[bot] commented 7 months ago

If there is no activity within the next two days, this issue will be closed automatically.