Kmoment opened this issue 1 year ago
@Kmoment, we are currently working towards making the pip package for aimet-onnx available. In the meantime, if you would like you could use the source code on this repo and compile it to generate a package.
@quic-mangal, Hi, I successfully generated the aimet-onnx package. But I ran into an error like this:
```
sim = QuantizationSimModel(model=model,
  File "/home/intsig/anaconda3/lib/python3.8/site-packages/aimet_onnx-0.0.1-py3.8.egg/aimet_onnx/quantsim.py", line 125, in __init__
    quantsim_configurator = self._add_configuration(config_file)
  File "/home/intsig/anaconda3/lib/python3.8/site-packages/aimet_onnx-0.0.1-py3.8.egg/aimet_onnx/quantsim.py", line 137, in _add_configuration
    quantsim_configurator.configure_quantizers(self.qc_quantize_op_dict, self.param_names, self.activation_names)
  File "/home/intsig/anaconda3/lib/python3.8/site-packages/aimet_onnx-0.0.1-py3.8.egg/aimet_onnx/quantsim_config/quantsim_config.py", line 117, in configure_quantizers
    self._override_param_bw_dtype(self._param_names, self._default_data_type, self._default_param_bw)
  File "/home/intsig/anaconda3/lib/python3.8/site-packages/aimet_onnx-0.0.1-py3.8.egg/aimet_onnx/quantsim_config/quantsim_config.py", line 178, in _override_param_bw_dtype
    self._quant_ops_dict[param_name].data_type = data_type
  File "/home/intsig/anaconda3/lib/python3.8/site-packages/aimet_onnx-0.0.1-py3.8.egg/aimet_onnx/qc_quantize_op.py", line 100, in data_type
    self.quant_info.isIntDataType = False
AttributeError: 'aimet_common.libquant_info.QcQuantizeInfo' object has no attribute 'isIntDataType'
```
Can you give me some suggestions? Thanks!
I used:

- Aimet torch-gpu-1.25.0
- AimetCommon torch-gpu-1.25.0
- AimetTorch torch-gpu-1.25.0
@Kmoment, thanks for the update!
This feature to support different data types is under development, so the cpp files that you have are not fully updated. There are two things you could do:

1. Build the cpp files as well.
2. Comment out these lines in your aimet_onnx code: https://github.com/quic/aimet/blob/develop/TrainingExtensions/onnx/src/python/aimet_onnx/qc_quantize_op.py#:~:text=%40property,isIntDataType%20%3D%20True and change line 79 to `self.data_type = QuantizationDataType.int`.
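For anyone wondering why the AttributeError appears before rebuilding: the Python property setter writes `isIntDataType` onto the compiled C++ `QcQuantizeInfo` struct, and a stale `libquant_info` binding simply does not expose that field. A minimal stand-alone sketch of the failure mode (no AIMET needed; the class name `QcQuantizeInfoOld` is hypothetical):

```python
# Stand-in for the outdated C++ binding: __slots__ mimics a fixed-layout
# struct that predates the `isIntDataType` field.
class QcQuantizeInfoOld:
    __slots__ = ("bitwidth",)      # note: no `isIntDataType` slot

    def __init__(self):
        self.bitwidth = 8

info = QcQuantizeInfoOld()
try:
    info.isIntDataType = False     # mirrors the failing line in qc_quantize_op.py
    failed = False
except AttributeError as err:
    failed = True
    print("Reproduced:", err)
```

Rebuilding the cpp files regenerates the binding with the new field, which is why option 1 above resolves the error.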
Hi, I get this error when I use aimet-onnx:
```
, param_type = conn_graph_op.parameters[param_name]
KeyError: '2539'
```

This is the information for the node where 2539 appears:

```
input: "721"
input: "729"
input: "2539"
output: "730"
name: "stage2.0.fuse_layers.0.1.2"
op_type: "Resize"
```
In ONNX, 2539 is a scales value. What could be the problem here? I ran the resnet18 example, which has no Resize operator, and it worked.
@haohao-qw Sorry, I don't know how to solve your problem. If your model is PyTorch, you can try:

```python
torch.onnx.export(fp32_model.eval(), dummy_input, filename,
                  training=torch.onnx.TrainingMode.PRESERVE,
                  export_params=True, do_constant_folding=False)
```
@haohao-qw, thanks for reporting this. Can you try exporting it the way @Kmoment suggested? In the meantime, we will look into reproducing this bug and fixing it. Tagging @quic-mtuttle as well.
@quic-mangal Hi, I found that aimet_onnx only supports "fold_all_batch_norms_to_weight" and "equalize_model", while aimet_torch/aimet_tensorflow also support "auto_quant" and "ada_quant".
Currently I can quantize my model with the aimet_onnx tools, but the results are not good. When will "auto_quant"/"ada_quant" be usable in aimet_onnx?
BR.
We are currently working on adding these features, though I don't have a good ETA for when they will be available.
Hi, I used a command in AIMET and ran into an error. How do I get aimet_onnx? Thanks.