Also, why does the ONNX exported for SNPE deployment contain extra quantization-like operations?
```yaml
extra_prepare_dict:
    extra_qconfig_dict:
        w_observer: MSEObserver
        a_observer: EMAMSEObserver
        w_fakequantize: AdaRoundFakeQuantize
        a_fakequantize: QDropFakeQuantize
        w_qscheme:
            bit: 8
            symmetry: False
            per_channel: True
            pot_scale: False
            p: 2.4
        a_qscheme:
            bit: 8
            symmetry: False
            per_channel: False
            pot_scale: False
            p: 2.4
quantize:
    backend: SNPE
    quantize_type: advanced_ptq    # support naive_ptq or advanced_ptq
    cali_batchsize: 16
    reconstruction:
        pattern: block
        scale_lr: 4.0e-5
        warm_up: 0.2
        weight: 0.1
        max_count: 20000
        b_range: [20, 2]
        keep_gpu: True
        round_mode: learned_hard_sigmoid
        prob: 0.5
    deploy:
        output_path: /home/mars/hewu/nvme1n1/03_datesets/01_PublicDataset/imagenet-mini
        model_name: 'mbv2'
model:                             # architecture details
    type: mobilenet_v2             # model name
    kwargs:
        num_classes: 1000
    path: /path-of-pretrained
data:
    path: /home/mars/hewu/nvme1n1/03_datesets/01_PublicDataset/imagenet-mini
    batch_size: 64
    num_workers: 4
    pin_memory: True
    input_size: 224
    test_resize: 256
process:
    seed: 1005
```
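For context, a config like this is normally driven through MQBench's prepare/calibrate/deploy flow. Below is a minimal sketch of that flow, not this issue's exact script: it assumes the standard MQBench API (`prepare_by_platform`, `enable_calibration`/`enable_quantization`, `convert_deploy`), and `calib_loader`, the input name `'data'`, and the export shape are placeholders I made up for illustration.

```python
# Minimal sketch (assumptions: MQBench installed, SNPE backend available,
# `calib_loader` is a user-provided calibration DataLoader).
import torch
import torchvision

from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.utils.state import enable_calibration, enable_quantization
from mqbench.convert_deploy import convert_deploy

model = torchvision.models.mobilenet_v2(num_classes=1000).eval()

# Trace the model and insert observers/fake-quantize modules according to the
# SNPE backend preset; extra_qconfig_dict in the config overrides the default
# observers and fake-quantizers (MSEObserver, AdaRound, QDrop, ...).
model = prepare_by_platform(model, BackendType.SNPE)

# Calibration pass: observers collect ranges, fake quantization stays off.
enable_calibration(model)
with torch.no_grad():
    for i, (images, _) in enumerate(calib_loader):  # placeholder loader
        if i >= 16:  # cali_batchsize
            break
        model(images)

# Turn fake quantization on; for advanced_ptq, the block reconstruction
# (AdaRound/QDrop, driven by the `reconstruction` section) runs at this stage.
enable_quantization(model)

# Export for SNPE: writes the ONNX graph plus the quantization parameters.
convert_deploy(model, BackendType.SNPE,
               input_shape_dict={'data': [1, 3, 224, 224]},  # placeholder name/shape
               model_name='mbv2')
```

If this is roughly the flow being used, the quantization-like nodes visible in the exported ONNX would just be the fake-quantize ops carrying the learned scales/zero-points for the backend converter, but that is an assumption about this particular setup rather than a confirmed explanation.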
This issue has not received any updates in 120 days. Please reply to this issue if it is still unresolved!