Closed AfifaIshtiaq closed 2 years ago
It's really difficult to comment based on that screenshot, but it looks like the error occurs while loading the model, which points to a TensorFlow-related issue rather than a Vitis-AI issue.
Do you have some form of custom optimizer in your model? Maybe the Layer-wise Adaptive Moments optimizer from the TensorFlow Addons module?
The clue here is in the TF error message... you need to ensure that this optimizer is loaded as a custom object. Try something like this:
float_model = tf.keras.models.load_model('deep_rx.h5', custom_objects={'LAMB': LAMB})
If that doesn't work, try loading with compile=False and then recompiling (with .compile()) before quantizing.
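A hypothetical sketch combining both suggestions: first try registering the optimizer as a custom object, then fall back to compile=False plus a recompile. The tensorflow_addons import path for LAMB and the placeholder recompile settings are assumptions, not taken from the thread:

```python
import tensorflow as tf

def load_float_model(path):
    """Load a Keras .h5 model whose optimizer may be a custom object."""
    try:
        # Assumption: the LAMB optimizer comes from tensorflow_addons.
        from tensorflow_addons.optimizers import LAMB
        return tf.keras.models.load_model(path, custom_objects={'LAMB': LAMB})
    except Exception:
        # Fallback: skip deserializing the training config entirely,
        # then recompile with placeholder settings before quantizing.
        model = tf.keras.models.load_model(path, compile=False)
        model.compile(optimizer='adam', loss='mse')  # placeholder settings
        return model
```

Since quantization only needs the inference graph and weights, the fallback path is usually sufficient even when the original optimizer class is unavailable.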
Can you please guide me where I should do the steps you asked for?

inputs = [rx_data_input, tx_pilot_input, raw_ch_est_input]

import tensorflow as tf  # pylint: disable=g-bad-import-order
from tensorflow import keras
from tensorflow_model_optimization.python.core.quantization.keras import quantize

float_model = tf.keras.models.load_model('deep_rx.h5', custom_objects={'LAMB': LAMB})

from tensorflow_model_optimization.quantization.keras import vitis_quantize
quantizer = vitis_quantize.VitisQuantizer(float_model)
quantized_model = quantizer.quantize_model(calib_dataset=Dataset_Train, calib_step=100, calib_batch_size=10)
I did that. I'm trying to see if it solves the issue, thanks.
Why are you using:
from tensorflow_model_optimization.python.core.quantization.keras import quantize
I was able to recompile. It now gives me the following issue. I have attached some of the files from my dataset in Dataset.zip along with a preprocessing script (preprocess.txt). Can you tell me where the issue is? My model accepts inputs in which In1, In2, In3, In4, In5, In6 are concatenated.
(vitis-ai-tensorflow2) Vitis-AI /workspace/models/ChEstModel > python3 Deep_RX_ptq_mod.py
[VAI INFO] Update custom_layer_type: []
Traceback (most recent call last):
  File "Deep_RX_ptq_mod.py", line 82, in
    self.optimize_model()
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/vitis_quantize.py", line 454, in optimize_model
    self._create_optimized_model()
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/vitis_quantize.py", line 228, in _create_optimized_model
    quantize_strategy=self._quantize_strategy)
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/vitis_quantize.py", line 700, in create_optimize_model
    model, candidate_layers, layer_metadata)
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/eight_bit/vitis_8bit_transforms_pipeline.py", line 74, in apply
    model, configs, available_transforms, candidate_layers, layer_metadata)
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/eight_bit/vitis_8bit_transforms_pipeline.py", line 50, in _apply_availables
    layer_metadata).recursive_transform()
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/graph_transformations/model_transformer.py", line 738, in recursive_transform
    self.layer_metadata).transform()
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/graph_transformations/model_transformer.py", line 704, in transform
    self._set_layer_weights(layer, weights_map)
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow_model_optimization/python/core/quantization/keras/vitis/graph_transformations/model_transformer.py", line 593, in _set_layer_weights
    K.batch_set_value(weight_value_tuples)
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/vitis-ai-user/.local/lib/python3.7/site-packages/keras/backend.py", line 4019, in batch_set_value
    x.assign(np.asarray(value, dtype=dtype_numpy(x)))
ValueError: Cannot assign value to variable 'BN-2-0/gamma:0': Shape mismatch. The variable shape (64,), and the assigned value shape (128,) are incompatible.
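For context on that ValueError: a BatchNormalization layer's gamma vector has one entry per input channel, so weights saved for a 128-channel layer cannot be assigned to a layer built for 64 channels. A minimal illustration (the layer name and shapes here are only for demonstration):

```python
import tensorflow as tf

# BatchNormalization's gamma is created with one scale per channel of the
# input it is built on; a layer built on 64 channels therefore cannot accept
# a gamma of shape (128,) saved from a 128-channel layer.
bn = tf.keras.layers.BatchNormalization(name='BN-2-0')
bn.build((None, 16, 16, 64))  # hypothetical NHWC input with 64 channels
print(tuple(bn.gamma.shape))  # one gamma entry per channel
```

This kind of mismatch typically means the graph transform rebuilt a layer with a different channel count than the saved weights expect, which is why the maintainers are asking about the model's input layers.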
Why are you using:
from tensorflow_model_optimization.python.core.quantization.keras import quantize
Please see the following issue
I don't see any answer to my question. Why are you using:
from tensorflow_model_optimization.python.core.quantization.keras import quantize
I'm not sure I understand what you mean by 'concatenated inputs'... can you post the model source code for the input layers?
You included a screenshot of a research paper - can you post the link to the complete paper?
from tensorflow_model_optimization.python.core.quantization.keras import quantize
I removed from tensorflow_model_optimization.python.core.quantization.keras import quantize and replaced it with from tensorflow_model_optimization.quantization.keras import vitis_quantize, as I have to quantize my model.
The screenshot is not from a paper. For now, I have been provided with the inference .h5 model, the input size as shown in the screenshot, and the dataset. I have to provide input in the format shown in the screenshot.
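For reference, a model with concatenated inputs of the kind described above can be sketched as a multi-input Keras model. Only the input names come from the snippet earlier in the thread; the shapes and layer choices are placeholder assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical input shapes (NHWC-style tensors); the real deep_rx.h5
# model's shapes would come from the provided screenshot.
rx_data_input = layers.Input(shape=(14, 64, 2), name='rx_data')
tx_pilot_input = layers.Input(shape=(14, 64, 2), name='tx_pilot')
raw_ch_est_input = layers.Input(shape=(14, 64, 2), name='raw_ch_est')

# Concatenate the separate inputs along the channel axis, then
# process them jointly with placeholder convolutional layers.
x = layers.Concatenate(axis=-1)([rx_data_input, tx_pilot_input, raw_ch_est_input])
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
outputs = layers.Conv2D(2, 3, padding='same')(x)

model = tf.keras.Model(
    inputs=[rx_data_input, tx_pilot_input, raw_ch_est_input],
    outputs=outputs)
```

Posting the actual input-layer definitions in this form (rather than a screenshot) would let the maintainers see exactly where the concatenation happens.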
@AfifaIshtiaq, is your problem resolved? The issue has been open for more than 1 month, so I will close it. You can reopen it when you need.
Can you please let me know what's the cause of this issue?