ModelTC / MQBench

Model Quantization Benchmark
Apache License 2.0

Is there a tool for converting between different BackendTypes, for example from the Academic backend to SNPE? #267

Closed hewumars closed 9 months ago

hewumars commented 1 year ago

(screenshot attached) Also, why does the ONNX model exported for SNPE deployment contain extra quantization-like operations?
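
For context, the backend is chosen when the float model is traced and prepared, so "Academic" and "SNPE" correspond to two separate `prepare_by_platform` calls rather than a conversion of an already-prepared graph. A minimal sketch using the public MQBench API (the mobilenet_v2 model and the deep copies are illustrative choices, not part of the original report):

```python
# Sketch: preparing the same float checkpoint under two BackendTypes.
# Only prepare_by_platform and BackendType come from the MQBench API;
# the model choice and deepcopy are illustrative.
import copy

import torchvision
from mqbench.prepare_by_platform import prepare_by_platform, BackendType

float_model = torchvision.models.mobilenet_v2(pretrained=True).eval()

# Each backend gets its own prepared (fake-quantized) graph from the same weights.
academic_model = prepare_by_platform(copy.deepcopy(float_model), BackendType.Academic)
snpe_model = prepare_by_platform(copy.deepcopy(float_model), BackendType.SNPE)
```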

  1. The deployment commands follow this reference: https://mqbench.readthedocs.io/en/latest/user_guide/deploy/snpe.html
  2. Training code: ${MQBench}/application/imagenet_example/PTQ/ptq/ptq.py
  3. The PTQ config file is as follows (a condensed sketch of the resulting prepare/calibrate/deploy flow is given after the config):
    extra_prepare_dict:
        extra_qconfig_dict:
            w_observer: MSEObserver
            a_observer: EMAMSEObserver
            w_fakequantize: AdaRoundFakeQuantize
            a_fakequantize: QDropFakeQuantize
            w_qscheme:
                bit: 8
                symmetry: False
                per_channel: True
                pot_scale: False
                p: 2.4
            a_qscheme:
                bit: 8
                symmetry: False
                per_channel: False
                pot_scale: False
                p: 2.4
    quantize:
        backend: SNPE
        quantize_type: advanced_ptq # support naive_ptq or advanced_ptq
        cali_batchsize: 16
        reconstruction:
            pattern: block
            scale_lr: 4.0e-5
            warm_up: 0.2
            weight: 0.1
            max_count: 20000
            b_range: [20,2]
            keep_gpu: True
            round_mode: learned_hard_sigmoid
            prob: 0.5
        deploy:
            output_path: /home/mars/hewu/nvme1n1/03_datesets/01_PublicDataset/imagenet-mini
            model_name: 'mbv2'
    model:                        # architecture details
        type: mobilenet_v2        # model name
        kwargs:
            num_classes: 1000
        path: /path-of-pretrained
    data:
        path: /home/mars/hewu/nvme1n1/03_datesets/01_PublicDataset/imagenet-mini
        batch_size: 64
        num_workers: 4
        pin_memory: True
        input_size: 224
        test_resize: 256
    process:
        seed: 1005
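
For reference, the config above maps onto roughly the following prepare, calibrate, and deploy flow. This is a condensed sketch only: the advanced_ptq reconstruction step from ptq.py is omitted, the calibration data is random stand-in tensors, and the 'data' input name/shape passed to convert_deploy is an assumption for this example.

```python
# Condensed sketch of the flow driven by the config above (naive calibration
# only; the advanced_ptq block-reconstruction step in ptq.py is omitted).
# Calibration data and the 'data' input name/shape are placeholders.
import torch
import torchvision
from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.utils.state import enable_calibration, enable_quantization
from mqbench.convert_deploy import convert_deploy

extra_qconfig_dict = {
    'w_observer': 'MSEObserver',
    'a_observer': 'EMAMSEObserver',
    'w_fakequantize': 'AdaRoundFakeQuantize',
    'a_fakequantize': 'QDropFakeQuantize',
    'w_qscheme': {'bit': 8, 'symmetry': False, 'per_channel': True, 'pot_scale': False},
    'a_qscheme': {'bit': 8, 'symmetry': False, 'per_channel': False, 'pot_scale': False},
}

model = torchvision.models.mobilenet_v2(pretrained=True).eval()
model = prepare_by_platform(model, BackendType.SNPE,
                            {'extra_qconfig_dict': extra_qconfig_dict})

# Calibration: observers collect ranges while fake quantization stays disabled.
enable_calibration(model)
calibration_batches = [torch.randn(16, 3, 224, 224) for _ in range(4)]  # stand-in data
with torch.no_grad():
    for images in calibration_batches:
        model(images)

# Turn fake quantization on for evaluation, then export for SNPE.
enable_quantization(model)
convert_deploy(model, BackendType.SNPE,
               {'data': [1, 3, 224, 224]},   # input name/shape assumed for this sketch
               output_path='./', model_name='mbv2')
```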
github-actions[bot] commented 9 months ago

This issue has not received any updates in 120 days. Please reply to this issue if it is still unresolved!