Open AlmogDavid opened 1 year ago
@quic-akhobare & @quic-klhsieh could you help reply to this.
Hi @AlmogDavid - the following details would help
Hi, thanks for replying. I trained a facial landmark detection model (a regression model) on a proprietary dataset; the closest architecture is MobileNetV2. I trained the QuantSim model generated with the script I shared with you, and I see it converging.
For exporting I just use the export API, which produces an ONNX model and the encodings. In my post there is a link to the SNPE forum where I shared how I used the SNPE toolkit to get a DLC.
I don't see anything wrong with your approach per se. I followed your thread on the SNPE forum and responded there. I am guessing that we are missing some command line arguments when invoking SNPE.
Are you providing exactly the same calibration data set to SNPE that you also used for AIMET?
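To illustrate why this matters: with symmetric quantization the encoding scale is derived directly from the calibration data, so two different calibration sets produce two different integer grids, and the on-target results then diverge from the AIMET simulation. A minimal pure-Python sketch (illustrative only, not AIMET or SNPE internals; all names here are made up):

```python
# Hypothetical illustration: symmetric 8-bit scale computed from two
# different calibration sets. If the SNPE quantizer sees different
# calibration data than AIMET did, the resulting scales diverge and the
# quantized integers no longer match the simulated ones.

def symmetric_scale(calib_values, bitwidth=8):
    """Scale for symmetric quantization: max |x| mapped to the int max."""
    abs_max = max(abs(v) for v in calib_values)
    return abs_max / (2 ** (bitwidth - 1) - 1)

calib_a = [-1.0, 0.5, 0.9]   # data used during AIMET calibration
calib_b = [-4.0, 2.0, 3.5]   # different data handed to the SNPE quantizer

scale_a = symmetric_scale(calib_a)   # 1.0 / 127
scale_b = symmetric_scale(calib_b)   # 4.0 / 127

# The same float value quantizes to very different integers under each grid:
q_a = round(0.9 / scale_a)   # near the top of the int8 range
q_b = round(0.9 / scale_b)   # much smaller integer -> precision loss
```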
Hi, I trained a model using a 16/8 configuration (the configuration JSON I used is attached below) and everything was fine during AIMET optimization. When I try to deploy the model to the DSP I use the commands mentioned in this post (which referred me to you): https://developer.qualcomm.com/forum/qdn-forums/software/qualcomm-neural-processing-sdk/70529
The run on the DSP gives bad results, not similar at all to the ones from AIMET training.
What am I doing wrong? Please don't tell me to talk with the SNPE team, as they told me to talk with you.
This is the configuration I'm using:
```json
{
    "defaults": {
        "ops": { "is_output_quantized": "True" },
        "params": { "is_quantized": "True", "is_symmetric": "True" },
        "strict_symmetric": "False",
        "per_channel_quantization": "True"
    },
    "params": {
        "bias": { "is_quantized": "False" }
    },
    "op_type": {
        "Squeeze": { "is_output_quantized": "False" },
        "Pad": { "is_output_quantized": "False" },
        "Mean": { "is_output_quantized": "False" }
    },
    "supergroups": [
        { "op_list": [ "Conv", "Relu" ] },
        { "op_list": [ "ConvTranspose", "Relu" ] },
        { "op_list": [ "Conv", "Clip" ] },
        { "op_list": [ "Add", "Relu" ] },
        { "op_list": [ "Gemm", "Relu" ] }
    ],
    "model_input": { "is_input_quantized": "True" },
    "model_output": {}
}
```
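One detail worth double-checking in this file: AIMET config files encode booleans as the strings `"True"`/`"False"`, not JSON booleans, so a plain `json.load` keeps them as strings and a bare `true` would silently change the type. A quick sanity check (illustrative only, not an AIMET API; the config is trimmed to the fields being checked):

```python
import json

# Hypothetical sanity check of the quantsim config: weights symmetric and
# per-channel, bias kept in float. Note the values are the *strings*
# "True"/"False" per AIMET's config convention.
config_text = '''
{
    "defaults": {
        "params": { "is_quantized": "True", "is_symmetric": "True" },
        "per_channel_quantization": "True"
    },
    "params": { "bias": { "is_quantized": "False" } }
}
'''

config = json.loads(config_text)
assert config["defaults"]["params"]["is_symmetric"] == "True"
assert config["defaults"]["per_channel_quantization"] == "True"
assert config["params"]["bias"]["is_quantized"] == "False"
```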
And this is how I create the AIMET model:

```python
def _quant_model_aimet(self, image_dl: DataLoader):
    from aimet_common.defs import QuantScheme
    from aimet_torch.batch_norm_fold import fold_all_batch_norms as aimet_fold_all_batch_norms
    from aimet_torch.model_preparer import prepare_model as aimet_prepare_model
    from aimet_torch.auto_quant_v2 import AutoQuant
    from aimet_torch.quantsim import QuantizationSimModel
```
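For context on what the 16/8 configuration buys you: activations at 16 bits lose far less precision than weights at 8 bits, which is the error the QuantSim model simulates during training. A plain-Python sketch of the quantize-dequantize round trip (illustrative only, not AIMET internals; the scales are made-up examples):

```python
def quant_dequant(x, scale, bitwidth, symmetric=True):
    """Round x to the integer grid and map it back to float (per-tensor)."""
    qmax = 2 ** (bitwidth - 1) - 1 if symmetric else 2 ** bitwidth - 1
    qmin = -qmax - 1 if symmetric else 0
    q = max(qmin, min(qmax, round(x / scale)))
    return q * scale

w_scale = 1.0 / 127       # 8-bit symmetric weight scale for |w| <= 1
a_scale = 6.0 / 65535     # 16-bit unsigned activation scale for [0, 6]

w_err = abs(0.3 - quant_dequant(0.3, w_scale, 8))
a_err = abs(0.3 - quant_dequant(0.3, a_scale, 16, symmetric=False))
# a_err is orders of magnitude smaller than w_err: 16-bit activations
# sit on a much finer grid than 8-bit weights.
```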