ModelTC / MQBench

Model Quantization Benchmark
Apache License 2.0

Quantizing YOLOX with the latest MQBench: selecting backend=tengine_u8 raises AttributeError: 'dict' object has no attribute 'detach' #156

Closed · RedHandLM closed this 2 years ago

RedHandLM commented 2 years ago

When running QAT on YOLOX with the latest MQBench through the UP (United-Perception) framework, selecting backend=tengine_u8 raises: AttributeError: 'dict' object has no attribute 'detach'

Here is the QAT config file used:

num_classes: &num_classes 13
runtime:
  aligned: true
  # async_norm: True
  special_bn_init: true
  task_names: quant_det
  runner:
    type: quant

quant:
  quant_type: qat
  deploy_backend: Tengine_u8
  cali_batch_size: 900
  prepare_args:
    extra_qconfig_dict:
      w_observer: MinMaxObserver
      a_observer: EMAMinMaxObserver
      w_fakequantize: FixedFakeQuantize
      a_fakequantize: FixedFakeQuantize
    leaf_module: [Space2Depth, FrozenBatchNorm2d]
    extra_quantizer_dict:
      additional_module_type: [ConvFreezebn2d, ConvFreezebnReLU2d]

mixup:
  type: yolox_mixup_cv2
  kwargs:
    extra_input: true
    input_size: [640, 640]
    mixup_scale: [0.8, 1.6]
    fill_color: 0

mosaic:
  type: mosaic
  kwargs:
    extra_input: true
    tar_size: 640
    fill_color: 0

random_perspective:
  type: random_perspective_yolox
  kwargs:
    degrees: 10.0 # 0.0
    translate: 0.1
    scale: [0.1, 2.0] # 0.5
    shear: 2.0 # 0.0
    perspective: 0.0
    fill_color: 0  # 0
    border: [-320, -320]

augment_hsv:
  type: augment_hsv
  kwargs:
    hgain: 0.015
    sgain: 0.7
    vgain: 0.4
    color_mode: BGR

flip:
  type: flip
  kwargs:
    flip_p: 0.5

to_tensor: &to_tensor
  type: custom_to_tensor

train_resize: &train_resize
  type: keep_ar_resize_max
  kwargs:
    max_size: 640
    random_size: [15, 25]
    scale_step: 32
    padding_type: left_top
    padding_val: 0

test_resize: &test_resize
  type: keep_ar_resize_max
  kwargs:
    max_size: 640
    padding_type: left_top
    padding_val: 0

dataset:
  train:
    dataset:
      type: coco
      kwargs:
        meta_file: train.json
        image_reader:
          type: fs_opencv
          kwargs:
            image_dir: &img_root /images/
            color_mode: BGR
        transformer: [*train_resize, *to_tensor]
    batch_sampler:
      type: base
      kwargs:
        sampler:
          type: dist
          kwargs: {}
        batch_size: 4
  test:
    dataset:
      type: coco
      kwargs:
        meta_file: &gt_file val.json
        image_reader:
          type: fs_opencv
          kwargs:
            image_dir: *img_root
            color_mode: BGR
        transformer: [*test_resize, *to_tensor]
        evaluator:
          type: COCO
          kwargs:
            gt_file: *gt_file
            iou_types: [bbox]
    batch_sampler:
      type: base
      kwargs:
        sampler:
          type: dist
          kwargs: {}
        batch_size: 4
  dataloader:
    type: base
    kwargs:
      num_workers: 4
      alignment: 32
      worker_init: true
      pad_type: batch_pad

trainer: # Required.
  max_epoch: &max_epoch 6             # total epochs for the training
  save_freq: 1
  test_freq: 1
  only_save_latest: false
  optimizer:                 # optimizer = SGD(params,lr=0.01,momentum=0.937,weight_decay=0.0005)
    register_type: yolov5
    type: SGD
    kwargs:
      lr: 0.0000003125
      momentum: 0.9
      nesterov: true
      weight_decay: 0.0      # weight_decay = 0.0005 * batch_size / 64
  lr_scheduler:              # lr_scheduler = MultiStepLR(optimizer, milestones=[9, 14], gamma=0.1)
    warmup_epochs: 0        # set to 0 to disable warmup. During warmup, target_lr = init_lr * total_batch_size
    warmup_type: linear
    warmup_ratio: 0.001
    type: MultiStepLR
    kwargs:
      milestones: [2, 4]     # epochs to decay lr
      gamma: 0.1             # decay rate

saver:
  save_dir: checkpoints/yolox_s_ret_a1_comloc_quant_tengine
  results_dir: results_dir/yolox_s_ret_a1_comloc_quant_tengine
  resume_model: /United-Perception/train_config/pretrain/300_65_ckpt_best.pth
  auto_resume: True

ema:
  enable: false
  ema_type: exp
  kwargs:
    decay: 0.9998

net:
- name: backbone
  type: yolox_s
  kwargs:
    out_layers: [2, 3, 4]
    out_strides: [8, 16, 32]
    normalize: {type: mqbench_freeze_bn}
    act_fn: {type: Silu}
- name: neck
  prev: backbone
  type: YoloxPAFPN
  kwargs:
    depth: 0.33
    out_strides: [8, 16, 32]
    normalize: {type: mqbench_freeze_bn}
    act_fn: {type: Silu}
- name: roi_head
  prev: neck
  type: YoloXHead
  kwargs:
    num_classes: *num_classes
    width: 0.5
    num_point: &dense_points 1
    normalize: {type: mqbench_freeze_bn}
    act_fn: {type: Silu}
- name: post_process
  prev: roi_head
  type: retina_post_iou
  kwargs:
    num_classes: *num_classes
                                  # number of classes including background; for RPN it's 2, for RetinaNet it's 81
    cfg:
      cls_loss:
        type: quality_focal_loss
        kwargs:
          gamma: 2.0
      iou_branch_loss:
        type: sigmoid_cross_entropy
      loc_loss:
        type: compose_loc_loss
        kwargs:
          loss_cfg:
          - type: iou_loss
            kwargs:
              loss_type: giou
              loss_weight: 1.0
          - type: l1_loss
            kwargs:
              loss_weight: 1.0
      anchor_generator:
        type: hand_craft
        kwargs:
          anchor_ratios: [1]    # anchor strides are provided as feature strides by feature extractor
          anchor_scales: [4]   # scale of anchors relative to feature map
      roi_supervisor:
        type: atss
        kwargs:
          top_n: 9
          use_iou: true
      roi_predictor:
        type: base
        kwargs:
          pre_nms_score_thresh: 0.05    # to reduce computation
          pre_nms_top_n: 1000
          post_nms_top_n: 1000
          roi_min_size: 0                 # minimum scale of a valid roi
          merger:
            type: retina
            kwargs:
              top_n: 100
              nms:
                type: naive
                nms_iou_thresh: 0.65
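
For reference, a minimal sketch of how the quant: block above maps onto MQBench's Python API (the actual wiring lives in UP's quant runner; the extra_qconfig_dict keys follow MQBench's documented prepare_custom_config_dict, and the leaf_module / extra_quantizer_dict strings are resolved to classes by UP before being passed down):

from mqbench.prepare_by_platform import BackendType, prepare_by_platform

prepare_args = {
    'extra_qconfig_dict': {
        'w_observer': 'MinMaxObserver',
        'a_observer': 'EMAMinMaxObserver',
        'w_fakequantize': 'FixedFakeQuantize',
        'a_fakequantize': 'FixedFakeQuantize',
    },
}
# model is the float detector to be prepared for QAT
model = prepare_by_platform(model, BackendType.Tengine_u8, prepare_args)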

Here is the error message:

[MQBENCH] INFO: Enable observer and Disable quantize for act_fake_quant
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/lsc/United-Perception/up/__main__.py", line 27, in <module>
    main()
  File "/data/lsc/United-Perception/up/__main__.py", line 21, in main
    args.run(args)
  File "/data/lsc/United-Perception/up/commands/train.py", line 144, in _main
    launch(main, args.num_gpus_per_machine, args.num_machines, args=args, start_method=args.fork_method)
  File "/data/lsc/United-Perception/up/utils/env/launch.py", line 52, in launch
    mp.start_processes(
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    while not context.join():
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 3 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
    fn(i, *args)
  File "/data/lsc/United-Perception/up/utils/env/launch.py", line 117, in _distributed_worker
    main_func(args)
  File "/data/lsc/United-Perception/up/commands/train.py", line 134, in main
    runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs'])
  File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 17, in __init__
    super(QuantRunner, self).__init__(config, work_dir, training)
  File "/data/lsc/United-Perception/up/runner/base_runner.py", line 59, in __init__
    self.build()
  File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 34, in build
    self.calibrate()
  File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 182, in calibrate
    self.model(batch)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/lsc/United-Perception/up/tasks/quant/models/model_helper.py", line 76, in forward
    output = submodule(input)
  File "/opt/conda/lib/python3.8/site-packages/torch/fx/graph_module.py", line 308, in wrapped_call
    return cls_call(self, *args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/fx/graph_module.py", line 308, in wrapped_call
    return cls_call(self, *args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_2>", line 4, in forward
    input_1_post_act_fake_quantizer = self.input_1_post_act_fake_quantizer(input_1);  input_1 = None
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/lsc/United-Perception/MQBench/mqbench/fake_quantize/fixed.py", line 20, in forward
    self.activation_post_process(X.detach())
AttributeError: 'dict' object has no attribute 'detach'

Could you please take a look at what the problem is? Does MQBench not support Tengine yet?

PannenetsF commented 2 years ago

I've produced a minimal code snippet:

import torch

from mqbench.prepare_by_platform import BackendType, prepare_by_platform

# Minimal model whose forward takes and returns a dict, mirroring how UP
# feeds the whole batch dict through the network.
class Model(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = torch.nn.Conv2d(3, 3, 3)
        self.conv2 = torch.nn.Conv2d(3, 3, 3)
        self.conv3 = torch.nn.Conv2d(3, 3, 3)

    def forward(self, x):
        data = x['img']
        x.update({'conv1': self.conv1(data)})
        x.update({'conv2': self.conv2(data)})
        x.update({'conv3': self.conv3(data)})
        return x

test_model = Model()
test_model = prepare_by_platform(test_model, BackendType.Tengine_u8)
print(test_model)
test_model({'img': torch.rand(1, 3, 224, 224)})
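
This reproduces the same failure: the traced GraphModule gets an input fake quantizer on the dict argument, and FixedFakeQuantize.forward calls X.detach(), which only Tensors implement. The root cause in isolation (a trivial sketch, independent of MQBench):

import torch

batch = {'img': torch.rand(1, 3, 224, 224)}
try:
    batch.detach()  # what the inserted input fake quantizer effectively attempts
except AttributeError as e:
    print(e)  # 'dict' object has no attribute 'detach'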

And I fixed it in https://github.com/PannenetsF/MQBench/tree/tu8
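
For readers without the branch, a minimal sketch of the kind of guard such a fix needs (illustrative only; TensorOnlyFakeQuantize is a hypothetical name, not the actual patch on the tu8 branch): skip fake quantization for non-Tensor inputs so the batch dict passes through untouched.

import torch
from mqbench.fake_quantize.fixed import FixedFakeQuantize  # module path as shown in the traceback

class TensorOnlyFakeQuantize(FixedFakeQuantize):
    # Hypothetical guard: only fake-quantize Tensor inputs; anything else
    # (e.g. the batch dict UP passes around) is returned unchanged.
    def forward(self, X):
        if not isinstance(X, torch.Tensor):
            return X
        return super().forward(X)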

PannenetsF commented 2 years ago

I wonder if this fixes the issue, so please provide the output of print(model.code) after prepare_by_platform.
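
For reference, .code is the Python source that torch.fx generates for a traced GraphModule's forward, so it can be dumped right after preparation (a sketch reusing the snippet above):

test_model = prepare_by_platform(test_model, BackendType.Tengine_u8)
print(test_model.code)  # the generated forward, including the inserted fake quantizers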

RedHandLM commented 2 years ago

No module named 'petrel_client'
init petrel failed
2022-08-01 10:37:35,210-rk0-normalize.py#44:import error No module named 'msbench'; If you need Msbench to prune model,     you should add Msbench to this project. Or just ignore this error.
No module named 'spring_aux'
2022-08-01 10:37:35,702-rk0-spconv_backbone.py#17:import error No module named 'spconv'; If you need spconv, you should install spconv !!!. Or just ignore this error
2022-08-01 10:37:35,872-rk0-launch.py#86:Rank 0 initialization finished.
2022-08-01 10:37:35,882-rk0-launch.py#86:Rank 1 initialization finished.
2022-08-01 10:37:35,890-rk0-launch.py#86:Rank 3 initialization finished.
2022-08-01 10:37:35,891-rk0-launch.py#86:Rank 2 initialization finished.
node memory info before build {'node_mem_total': 125.752, 'node_mem_used': 24.171, 'node_mem_used_percent': 19.9, 'node_swap_mem_total': 0.0, 'node_swap_mem_used_percent': 0.0}
2022-08-01 10:37:42,970-rk0-base_runner.py#228:world size:4
2022-08-01 10:37:43,113-rk0-base_runner.py#274:current git version 0.2.0_github-8-gbe6a433
2022-08-01 10:37:43,357-rk0-base_runner.py#177:build train:train done
2022-08-01 10:37:43,389-rk0-data_builder.py#46:We use dist_test instead of dist for test
2022-08-01 10:37:43,389-rk0-base_runner.py#177:build test:test done
QuantModelHelper(
  (backbone_neck_roi_head): ModelHelper(
    (backbone): CSPDarknet(
      (stem): Focus(
        (space2depth): Space2Depth()
        (conv_block): ConvBnAct(
          (conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
      )
      (dark2): Sequential(
        (0): ConvBnAct(
          (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (1): CSPLayer(
          (conv1): ConvBnAct(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv2): ConvBnAct(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv3): ConvBnAct(
            (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
          )
        )
      )
      (dark3): Sequential(
        (0): ConvBnAct(
          (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (1): CSPLayer(
          (conv1): ConvBnAct(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv2): ConvBnAct(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv3): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
            (1): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
            (2): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
          )
        )
      )
      (dark4): Sequential(
        (0): ConvBnAct(
          (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (1): CSPLayer(
          (conv1): ConvBnAct(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv2): ConvBnAct(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv3): ConvBnAct(
            (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
            (1): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
            (2): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
          )
        )
      )
      (dark5): Sequential(
        (0): ConvBnAct(
          (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (1): SPP(
          (conv_block1): ConvBnAct(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv_block2): ConvBnAct(
            (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (pooling_blocks): ModuleList(
            (0): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)
            (1): MaxPool2d(kernel_size=9, stride=1, padding=4, dilation=1, ceil_mode=False)
            (2): MaxPool2d(kernel_size=13, stride=1, padding=6, dilation=1, ceil_mode=False)
          )
        )
        (2): CSPLayer(
          (conv1): ConvBnAct(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv2): ConvBnAct(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (conv3): ConvBnAct(
            (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): ConvBnAct(
                (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
              (conv2): ConvBnAct(
                (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                (act): SiLU()
              )
            )
          )
        )
      )
    )
    (neck): YoloxPAFPN(
      (lateral_conv0): ConvBnAct(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU()
      )
      (C3_p4): CSPLayer(
        (conv1): ConvBnAct(
          (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv2): ConvBnAct(
          (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv3): ConvBnAct(
          (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): ConvBnAct(
              (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
            (conv2): ConvBnAct(
              (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
          )
        )
      )
      (reduce_conv1): ConvBnAct(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU()
      )
      (C3_p3): CSPLayer(
        (conv1): ConvBnAct(
          (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv2): ConvBnAct(
          (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv3): ConvBnAct(
          (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): ConvBnAct(
              (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
            (conv2): ConvBnAct(
              (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
          )
        )
      )
      (bu_conv2): ConvBnAct(
        (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU()
      )
      (C3_n3): CSPLayer(
        (conv1): ConvBnAct(
          (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv2): ConvBnAct(
          (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv3): ConvBnAct(
          (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): ConvBnAct(
              (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
            (conv2): ConvBnAct(
              (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
          )
        )
      )
      (bu_conv1): ConvBnAct(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU()
      )
      (C3_n4): CSPLayer(
        (conv1): ConvBnAct(
          (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv2): ConvBnAct(
          (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (conv3): ConvBnAct(
          (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): ConvBnAct(
              (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
            (conv2): ConvBnAct(
              (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn1): FrozenBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU()
            )
          )
        )
      )
    )
    (roi_head): YoloXHead(
      (cls_convs): ModuleList(
        (0): Sequential(
          (0): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (1): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
        )
        (1): Sequential(
          (0): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (1): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
        )
        (2): Sequential(
          (0): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (1): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
        )
      )
      (reg_convs): ModuleList(
        (0): Sequential(
          (0): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (1): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
        )
        (1): Sequential(
          (0): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (1): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
        )
        (2): Sequential(
          (0): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
          (1): ConvBnAct(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (act): SiLU()
          )
        )
      )
      (cls_preds): ModuleList(
        (0): Conv2d(128, 12, kernel_size=(1, 1), stride=(1, 1))
        (1): Conv2d(128, 12, kernel_size=(1, 1), stride=(1, 1))
        (2): Conv2d(128, 12, kernel_size=(1, 1), stride=(1, 1))
      )
      (reg_preds): ModuleList(
        (0): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
        (1): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
        (2): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
      )
      (obj_preds): ModuleList(
        (0): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
        (1): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
        (2): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
      )
      (stems): ModuleList(
        (0): ConvBnAct(
          (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (1): ConvBnAct(
          (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
        (2): ConvBnAct(
          (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn1): FrozenBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU()
        )
      )
    )
  )
  (post_process): ModelHelper(
    (post_process): IOUPostProcess(
      (cls_loss): QualityFocalLoss()
      (iou_branch_loss): SigmoidCrossEntropyLoss()
    )
  )
)
['backbone_neck_roi_head']
{'backbone': 'backbone_neck_roi_head', 'neck': 'backbone_neck_roi_head', 'roi_head': 'backbone_neck_roi_head', 'post_process': 'post_process'}
2022-08-01 10:37:44,637-rk0-base_runner.py#295:build hooks done
2022-08-01 10:37:44,637-rk0-saver_helper.py#60:Not found any valid checkpoint yet
2022-08-01 10:37:44,637-rk0-saver_helper.py#63:Load checkpoint from /data/lsc/United-Perception/train_config/pretrain/300_65_ckpt_best.pth
================strict    False
[MQBENCH] INFO: Quantize model Scheme: BackendType.Tengine_u8 Mode: Training
[MQBENCH] INFO: Weight Qconfig:
    FakeQuantize: FixedFakeQuantize Params: {}
    Oberver:      MinMaxObserver Params: Symmetric: False / Bitwidth: 8 / Per channel: False / Pot scale: False / Extra kwargs: {}
[MQBENCH] INFO: Activation Qconfig:
    FakeQuantize: FixedFakeQuantize Params: {}
    Oberver:      EMAMinMaxObserver Params: Symmetric: False / Bitwidth: 8 / Per channel: False / Pot scale: False / Extra kwargs: {}
2022-08-01 10:37:46,990-rk0-quant_runner.py#126:prepare quantize model
[MQBENCH] INFO: Quantize model Scheme: BackendType.Tengine_u8 Mode: Training
[MQBENCH] INFO: Weight Qconfig:
    FakeQuantize: FixedFakeQuantize Params: {}
    Oberver:      MinMaxObserver Params: Symmetric: False / Bitwidth: 8 / Per channel: False / Pot scale: False / Extra kwargs: {}
[MQBENCH] INFO: Activation Qconfig:
    FakeQuantize: FixedFakeQuantize Params: {}
    Oberver:      EMAMinMaxObserver Params: Symmetric: False / Bitwidth: 8 / Per channel: False / Pot scale: False / Extra kwargs: {}
[MQBENCH] INFO: Replace module to qat module.
[MQBENCH] INFO: Insert act quant backbone_stem_space2depth_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_stem_conv_block_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_3_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_4_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_5_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_6_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_7_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_3_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_4_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_5_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_4_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_lateral_conv0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_6_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_8_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_7_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_reduce_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_8_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_9_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_9_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_10_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_10_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_11_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_12_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_11_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_13_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_8_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_9_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_10_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_stem_conv_block_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_stem_conv_block_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_stem_conv_block_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_stem_space2depth_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_stem_conv_block_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_3_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_4_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_5_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_lateral_conv0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_lateral_conv0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_lateral_conv0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_6_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_7_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_reduce_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_3_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_reduce_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_reduce_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_4_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_5_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_4_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_lateral_conv0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_6_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_8_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_7_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_reduce_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_8_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_9_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_9_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_10_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_10_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_11_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv3_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_12_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant add_11_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant cat_13_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant input_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant input_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_8_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant input_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_9_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant getitem_10_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_2_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_0_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_1_act_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_stem_conv_block_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark2_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark3_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark4_1_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant backbone_dark5_2_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_lateral_conv0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_reduce_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant interpolate_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_p3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n3_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_bu_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant neck_c3_n4_conv3_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_0_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_0_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_1_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_1_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_stems_2_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_0_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_convs_2_1_conv_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_cls_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_reg_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant roi_head_obj_preds_2_post_act_fake_quantizer
[MQBENCH] INFO: Insert act quant input_1_post_act_fake_quantizer
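
For reference, each FixedFakeQuantize shown in the module dump below is configured as a per-tensor affine uint8 quantize-dequantize (quant_min=0, quant_max=255), roughly what torch.fake_quantize_per_tensor_affine computes. A minimal sketch of that computation, using the freshly initialized scale=1.0 / zero_point=0 printed in the dump -- illustrative only, not MQBench's actual implementation:

import torch

def fake_quantize(x: torch.Tensor, scale: float, zero_point: int,
                  quant_min: int = 0, quant_max: int = 255) -> torch.Tensor:
    # Quantize: snap float values onto the uint8 grid and clamp to [0, 255].
    q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    # Dequantize: map back to float so downstream modules keep running in fp32.
    return (q - zero_point) * scale

x = torch.randn(1, 12, 4, 4)
y = fake_quantize(x, scale=1.0, zero_point=0)  # scale/zero_point as printed below
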
GraphModule(
  (backbone): Module(
    (stem): Module(
      (space2depth): Space2Depth()
      (conv_block): Module(
        (conv): ConvFreezebn2d(
          12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
    )
    (dark2): Module(
      (0): Module(
        (conv): ConvFreezebn2d(
          32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (1): Module(
        (conv1): Module(
          (conv): ConvFreezebn2d(
            64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (conv2): Module(
          (conv): ConvFreezebn2d(
            64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (m): Module(
          (0): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
        )
        (conv3): Module(
          (conv): ConvFreezebn2d(
            64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
    )
    (dark3): Module(
      (0): Module(
        (conv): ConvFreezebn2d(
          64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (1): Module(
        (conv1): Module(
          (conv): ConvFreezebn2d(
            128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (conv2): Module(
          (conv): ConvFreezebn2d(
            128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (m): Module(
          (0): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
          (1): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
          (2): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
        )
        (conv3): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
    )
    (dark4): Module(
      (0): Module(
        (conv): ConvFreezebn2d(
          128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (1): Module(
        (conv1): Module(
          (conv): ConvFreezebn2d(
            256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (conv2): Module(
          (conv): ConvFreezebn2d(
            256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (m): Module(
          (0): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
          (1): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
          (2): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
        )
        (conv3): Module(
          (conv): ConvFreezebn2d(
            256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
    )
    (dark5): Module(
      (0): Module(
        (conv): ConvFreezebn2d(
          256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (1): Module(
        (conv_block1): Module(
          (conv): ConvFreezebn2d(
            512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (pooling_blocks): Module(
          (0): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)
          (1): MaxPool2d(kernel_size=9, stride=1, padding=4, dilation=1, ceil_mode=False)
          (2): MaxPool2d(kernel_size=13, stride=1, padding=6, dilation=1, ceil_mode=False)
        )
        (conv_block2): Module(
          (conv): ConvFreezebn2d(
            1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
      (2): Module(
        (conv1): Module(
          (conv): ConvFreezebn2d(
            512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (conv2): Module(
          (conv): ConvFreezebn2d(
            512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (m): Module(
          (0): Module(
            (conv1): Module(
              (conv): ConvFreezebn2d(
                256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
            (conv2): Module(
              (conv): ConvFreezebn2d(
                256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
                (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (weight_fake_quant): FixedFakeQuantize(
                  fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                  (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
                )
              )
              (act): SiLU()
            )
          )
        )
        (conv3): Module(
          (conv): ConvFreezebn2d(
            512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
    )
  )
  (neck): Module(
    (lateral_conv0): Module(
      (conv): ConvFreezebn2d(
        512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
        (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (act): SiLU()
    )
    (C3_p4): Module(
      (conv1): Module(
        (conv): ConvFreezebn2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (conv2): Module(
        (conv): ConvFreezebn2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (m): Module(
        (0): Module(
          (conv1): Module(
            (conv): ConvFreezebn2d(
              128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
          (conv2): Module(
            (conv): ConvFreezebn2d(
              128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
        )
      )
      (conv3): Module(
        (conv): ConvFreezebn2d(
          256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
    )
    (reduce_conv1): Module(
      (conv): ConvFreezebn2d(
        256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
        (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (act): SiLU()
    )
    (C3_p3): Module(
      (conv1): Module(
        (conv): ConvFreezebn2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (conv2): Module(
        (conv): ConvFreezebn2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (m): Module(
        (0): Module(
          (conv1): Module(
            (conv): ConvFreezebn2d(
              64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
          (conv2): Module(
            (conv): ConvFreezebn2d(
              64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
        )
      )
      (conv3): Module(
        (conv): ConvFreezebn2d(
          128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
    )
    (bu_conv2): Module(
      (conv): ConvFreezebn2d(
        128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
        (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (act): SiLU()
    )
    (C3_n3): Module(
      (conv1): Module(
        (conv): ConvFreezebn2d(
          256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (conv2): Module(
        (conv): ConvFreezebn2d(
          256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (m): Module(
        (0): Module(
          (conv1): Module(
            (conv): ConvFreezebn2d(
              128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
          (conv2): Module(
            (conv): ConvFreezebn2d(
              128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
        )
      )
      (conv3): Module(
        (conv): ConvFreezebn2d(
          256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
    )
    (bu_conv1): Module(
      (conv): ConvFreezebn2d(
        256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
        (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (act): SiLU()
    )
    (C3_n4): Module(
      (conv1): Module(
        (conv): ConvFreezebn2d(
          512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (conv2): Module(
        (conv): ConvFreezebn2d(
          512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (m): Module(
        (0): Module(
          (conv1): Module(
            (conv): ConvFreezebn2d(
              256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
          (conv2): Module(
            (conv): ConvFreezebn2d(
              256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
              (bn): FrozenBatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
              (weight_fake_quant): FixedFakeQuantize(
                fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
                (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
              )
            )
            (act): SiLU()
          )
        )
      )
      (conv3): Module(
        (conv): ConvFreezebn2d(
          512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
    )
  )
  (roi_head): Module(
    (stems): Module(
      (0): Module(
        (conv): ConvFreezebn2d(
          128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (1): Module(
        (conv): ConvFreezebn2d(
          256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
      (2): Module(
        (conv): ConvFreezebn2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (weight_fake_quant): FixedFakeQuantize(
            fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
            (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
          )
        )
        (act): SiLU()
      )
    )
    (cls_convs): Module(
      (0): Module(
        (0): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (1): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
      (1): Module(
        (0): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (1): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
      (2): Module(
        (0): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (1): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
    )
    (reg_convs): Module(
      (0): Module(
        (0): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (1): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
      (1): Module(
        (0): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (1): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
      (2): Module(
        (0): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
        (1): Module(
          (conv): ConvFreezebn2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (bn): FrozenBatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (weight_fake_quant): FixedFakeQuantize(
              fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
              (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
            )
          )
          (act): SiLU()
        )
      )
    )
    (cls_preds): Module(
      (0): Conv2d(
        128, 12, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (1): Conv2d(
        128, 12, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (2): Conv2d(
        128, 12, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
    )
    (reg_preds): Module(
      (0): Conv2d(
        128, 4, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (1): Conv2d(
        128, 4, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (2): Conv2d(
        128, 4, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
    )
    (obj_preds): Module(
      (0): Conv2d(
        128, 1, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (1): Conv2d(
        128, 1, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
      (2): Conv2d(
        128, 1, kernel_size=(1, 1), stride=(1, 1)
        (weight_fake_quant): FixedFakeQuantize(
          fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0', dtype=torch.int32)
          (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
        )
      )
    )
  )
  (backbone_stem_space2depth_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_stem_conv_block_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_dark2_0_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_dark2_1_conv1_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_dark2_1_m_0_conv1_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_dark2_1_m_0_conv2_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (add_1_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_dark2_1_conv2_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (cat_1_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_dark2_1_conv3_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (backbone_dark3_0_act_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  [... 147 further FixedFakeQuantize modules elided here for brevity: every one of them prints exactly the same state as the block above (fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32), with an EMAMinMaxObserver at min_val=inf, max_val=-inf). The elided quantizers, in graph order, each named <stem>_post_act_fake_quantizer, are:
   backbone_dark3_1_conv1_act, backbone_dark3_1_m_0_conv1_act, backbone_dark3_1_m_0_conv2_act, add_2, backbone_dark3_1_m_1_conv1_act, backbone_dark3_1_m_1_conv2_act, add_3,
   backbone_dark3_1_m_2_conv1_act, backbone_dark3_1_m_2_conv2_act, add_4, backbone_dark3_1_conv2_act, cat_2, backbone_dark3_1_conv3_act, backbone_dark4_0_act,
   backbone_dark4_1_conv1_act, backbone_dark4_1_m_0_conv1_act, backbone_dark4_1_m_0_conv2_act, add_5, backbone_dark4_1_m_1_conv1_act, backbone_dark4_1_m_1_conv2_act, add_6,
   backbone_dark4_1_m_2_conv1_act, backbone_dark4_1_m_2_conv2_act, add_7, backbone_dark4_1_conv2_act, cat_3, backbone_dark4_1_conv3_act, backbone_dark5_0_act,
   backbone_dark5_1_conv_block1_act, cat_4, backbone_dark5_1_conv_block2_act, backbone_dark5_2_conv1_act, backbone_dark5_2_m_0_conv1_act, backbone_dark5_2_m_0_conv2_act, backbone_dark5_2_conv2_act, cat_5, getitem_4,
   neck_lateral_conv0_act, cat_6, neck_c3_p4_conv1_act, neck_c3_p4_m_0_conv1_act, neck_c3_p4_m_0_conv2_act, add_8, neck_c3_p4_conv2_act, cat_7, neck_c3_p4_conv3_act,
   neck_reduce_conv1_act, cat_8, neck_c3_p3_conv1_act, neck_c3_p3_m_0_conv1_act, neck_c3_p3_m_0_conv2_act, add_9, neck_c3_p3_conv2_act, cat_9, neck_c3_p3_conv3_act, cat_10,
   neck_c3_n3_conv1_act, neck_c3_n3_m_0_conv1_act, neck_c3_n3_m_0_conv2_act, add_10, neck_c3_n3_conv2_act, cat_11, neck_c3_n3_conv3_act, cat_12,
   neck_c3_n4_conv1_act, neck_c3_n4_m_0_conv1_act, neck_c3_n4_m_0_conv2_act, add_11, neck_c3_n4_conv2_act, cat_13, getitem_8,
   roi_head_stems_0_act, roi_head_cls_convs_0_0_act, roi_head_reg_convs_0_0_act, roi_head_cls_convs_0_1_act, roi_head_reg_convs_0_1_act, getitem_9,
   roi_head_stems_1_act, roi_head_cls_convs_1_0_act, roi_head_reg_convs_1_0_act, roi_head_cls_convs_1_1_act, roi_head_reg_convs_1_1_act, getitem_10,
   roi_head_stems_2_act, roi_head_cls_convs_2_0_act, roi_head_reg_convs_2_0_act, roi_head_cls_convs_2_1_act, roi_head_reg_convs_2_1_act,
   backbone_stem_conv_block_conv, backbone_dark2_0_conv, backbone_dark2_1_conv1_conv, backbone_dark2_1_conv2_conv, backbone_dark2_1_m_0_conv1_conv, backbone_dark2_1_m_0_conv2_conv, backbone_dark2_1_conv3_conv,
   backbone_dark3_0_conv, backbone_dark3_1_conv1_conv, backbone_dark3_1_conv2_conv, backbone_dark3_1_m_0_conv1_conv, backbone_dark3_1_m_0_conv2_conv, backbone_dark3_1_m_1_conv1_conv, backbone_dark3_1_m_1_conv2_conv, backbone_dark3_1_m_2_conv1_conv, backbone_dark3_1_m_2_conv2_conv, backbone_dark3_1_conv3_conv,
   backbone_dark4_0_conv, backbone_dark4_1_conv1_conv, backbone_dark4_1_conv2_conv, backbone_dark4_1_m_0_conv1_conv, backbone_dark4_1_m_0_conv2_conv, backbone_dark4_1_m_1_conv1_conv, backbone_dark4_1_m_1_conv2_conv, backbone_dark4_1_m_2_conv1_conv, backbone_dark4_1_m_2_conv2_conv, backbone_dark4_1_conv3_conv,
   backbone_dark5_0_conv, backbone_dark5_1_conv_block1_conv, backbone_dark5_1_pooling_blocks_0, backbone_dark5_1_pooling_blocks_1, backbone_dark5_1_pooling_blocks_2, backbone_dark5_1_conv_block2_conv,
   backbone_dark5_2_conv1_conv, backbone_dark5_2_conv2_conv, backbone_dark5_2_m_0_conv1_conv, backbone_dark5_2_m_0_conv2_conv, backbone_dark5_2_conv3_conv,
   neck_lateral_conv0_conv, interpolate_1, neck_c3_p4_conv1_conv, neck_c3_p4_conv2_conv, neck_c3_p4_m_0_conv1_conv, neck_c3_p4_m_0_conv2_conv, neck_c3_p4_conv3_conv,
   neck_reduce_conv1_conv, interpolate_2, neck_c3_p3_conv1_conv, neck_c3_p3_conv2_conv, neck_c3_p3_m_0_conv1_conv, neck_c3_p3_m_0_conv2_conv, neck_c3_p3_conv3_conv, neck_bu_conv2_conv,
   neck_c3_n3_conv1_conv, neck_c3_n3_conv2_conv, neck_c3_n3_m_0_conv1_conv, neck_c3_n3_m_0_conv2_conv, neck_c3_n3_conv3_conv, neck_bu_conv1_conv]
  (neck_c3_n4_conv1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (neck_c3_n4_conv2_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (neck_c3_n4_conv3_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_stems_0_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_convs_0_0_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_convs_0_1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_convs_0_0_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_convs_0_1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_preds_0_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_preds_0_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_obj_preds_0_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_stems_1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_convs_1_0_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_convs_1_1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_convs_1_0_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_convs_1_1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_preds_1_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_preds_1_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_obj_preds_1_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_stems_2_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_convs_2_0_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_convs_2_1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_convs_2_0_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_convs_2_1_conv_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_cls_preds_2_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_reg_preds_2_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (roi_head_obj_preds_2_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
  (input_1_post_act_fake_quantizer): FixedFakeQuantize(
    fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0], dtype=torch.int32)
    (activation_post_process): EMAMinMaxObserver(min_val=inf, max_val=-inf ch_axis=-1 pot=False)
  )
)
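The FX-generated forward below shows where this fails: the tracer wrapped the raw network input in input_1_post_act_fake_quantizer, but that input is the UP data dict, not a Tensor (it is indexed with ['image'] immediately afterwards). FixedFakeQuantize appears to behave like torch's FakeQuantize here, calling X.detach() before handing X to its observer, hence AttributeError: 'dict' object has no attribute 'detach'. A minimal sketch of the failure mode, assuming only the standard torch fake-quantize path (not MQBench internals):

import torch
from torch.quantization import FakeQuantize, MovingAverageMinMaxObserver

# stand-in for the UP batch dict that reaches the traced forward
batch = {'image': torch.randn(1, 3, 640, 640)}

fq = FakeQuantize(observer=MovingAverageMinMaxObserver, quant_min=0, quant_max=255)
fq(batch['image'])  # ok: the observer receives a Tensor
fq(batch)           # AttributeError: 'dict' object has no attribute 'detach'

If that reading is right, the fix belongs on the tracing side: the input fake quantizer should see the image Tensor, not the whole data dict.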
import torch
def forward(self, input):
    input_1 = input
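    # note: input_1 is the UP data dict (indexed with ['image'] below), so the
    # next call hands a dict to the fake quantizer's observer path and raises
    # AttributeError: 'dict' object has no attribute 'detach'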
    input_1_post_act_fake_quantizer = self.input_1_post_act_fake_quantizer(input_1);  input_1 = None
    getitem = input_1_post_act_fake_quantizer['image']
    backbone_stem_space2depth = self.backbone.stem.space2depth(getitem);  getitem = None
    backbone_stem_space2depth_post_act_fake_quantizer = self.backbone_stem_space2depth_post_act_fake_quantizer(backbone_stem_space2depth);  backbone_stem_space2depth = None
    backbone_stem_conv_block_conv = self.backbone.stem.conv_block.conv(backbone_stem_space2depth_post_act_fake_quantizer);  backbone_stem_space2depth_post_act_fake_quantizer = None
    backbone_stem_conv_block_conv_post_act_fake_quantizer = self.backbone_stem_conv_block_conv_post_act_fake_quantizer(backbone_stem_conv_block_conv);  backbone_stem_conv_block_conv = None
    backbone_stem_conv_block_act = self.backbone.stem.conv_block.act(backbone_stem_conv_block_conv_post_act_fake_quantizer);  backbone_stem_conv_block_conv_post_act_fake_quantizer = None
    backbone_stem_conv_block_act_post_act_fake_quantizer = self.backbone_stem_conv_block_act_post_act_fake_quantizer(backbone_stem_conv_block_act);  backbone_stem_conv_block_act = None
    backbone_dark2_0_conv = getattr(self.backbone.dark2, "0").conv(backbone_stem_conv_block_act_post_act_fake_quantizer);  backbone_stem_conv_block_act_post_act_fake_quantizer = None
    backbone_dark2_0_conv_post_act_fake_quantizer = self.backbone_dark2_0_conv_post_act_fake_quantizer(backbone_dark2_0_conv);  backbone_dark2_0_conv = None
    backbone_dark2_0_act = getattr(self.backbone.dark2, "0").act(backbone_dark2_0_conv_post_act_fake_quantizer);  backbone_dark2_0_conv_post_act_fake_quantizer = None
    backbone_dark2_0_act_post_act_fake_quantizer = self.backbone_dark2_0_act_post_act_fake_quantizer(backbone_dark2_0_act);  backbone_dark2_0_act = None
    backbone_dark2_1_conv1_conv = getattr(self.backbone.dark2, "1").conv1.conv(backbone_dark2_0_act_post_act_fake_quantizer)
    backbone_dark2_1_conv1_conv_post_act_fake_quantizer = self.backbone_dark2_1_conv1_conv_post_act_fake_quantizer(backbone_dark2_1_conv1_conv);  backbone_dark2_1_conv1_conv = None
    backbone_dark2_1_conv1_act = getattr(self.backbone.dark2, "1").conv1.act(backbone_dark2_1_conv1_conv_post_act_fake_quantizer);  backbone_dark2_1_conv1_conv_post_act_fake_quantizer = None
    backbone_dark2_1_conv1_act_post_act_fake_quantizer = self.backbone_dark2_1_conv1_act_post_act_fake_quantizer(backbone_dark2_1_conv1_act);  backbone_dark2_1_conv1_act = None
    backbone_dark2_1_conv2_conv = getattr(self.backbone.dark2, "1").conv2.conv(backbone_dark2_0_act_post_act_fake_quantizer);  backbone_dark2_0_act_post_act_fake_quantizer = None
    backbone_dark2_1_conv2_conv_post_act_fake_quantizer = self.backbone_dark2_1_conv2_conv_post_act_fake_quantizer(backbone_dark2_1_conv2_conv);  backbone_dark2_1_conv2_conv = None
    backbone_dark2_1_conv2_act = getattr(self.backbone.dark2, "1").conv2.act(backbone_dark2_1_conv2_conv_post_act_fake_quantizer);  backbone_dark2_1_conv2_conv_post_act_fake_quantizer = None
    backbone_dark2_1_conv2_act_post_act_fake_quantizer = self.backbone_dark2_1_conv2_act_post_act_fake_quantizer(backbone_dark2_1_conv2_act);  backbone_dark2_1_conv2_act = None
    backbone_dark2_1_m_0_conv1_conv = getattr(getattr(self.backbone.dark2, "1").m, "0").conv1.conv(backbone_dark2_1_conv1_act_post_act_fake_quantizer)
    backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer = self.backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer(backbone_dark2_1_m_0_conv1_conv);  backbone_dark2_1_m_0_conv1_conv = None
    backbone_dark2_1_m_0_conv1_act = getattr(getattr(self.backbone.dark2, "1").m, "0").conv1.act(backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer);  backbone_dark2_1_m_0_conv1_conv_post_act_fake_quantizer = None
    backbone_dark2_1_m_0_conv1_act_post_act_fake_quantizer = self.backbone_dark2_1_m_0_conv1_act_post_act_fake_quantizer(backbone_dark2_1_m_0_conv1_act);  backbone_dark2_1_m_0_conv1_act = None
    backbone_dark2_1_m_0_conv2_conv = getattr(getattr(self.backbone.dark2, "1").m, "0").conv2.conv(backbone_dark2_1_m_0_conv1_act_post_act_fake_quantizer);  backbone_dark2_1_m_0_conv1_act_post_act_fake_quantizer = None
    backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer = self.backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer(backbone_dark2_1_m_0_conv2_conv);  backbone_dark2_1_m_0_conv2_conv = None
    backbone_dark2_1_m_0_conv2_act = getattr(getattr(self.backbone.dark2, "1").m, "0").conv2.act(backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer);  backbone_dark2_1_m_0_conv2_conv_post_act_fake_quantizer = None
    backbone_dark2_1_m_0_conv2_act_post_act_fake_quantizer = self.backbone_dark2_1_m_0_conv2_act_post_act_fake_quantizer(backbone_dark2_1_m_0_conv2_act);  backbone_dark2_1_m_0_conv2_act = None
    add_1 = backbone_dark2_1_m_0_conv2_act_post_act_fake_quantizer + backbone_dark2_1_conv1_act_post_act_fake_quantizer;  backbone_dark2_1_m_0_conv2_act_post_act_fake_quantizer = backbone_dark2_1_conv1_act_post_act_fake_quantizer = None
    add_1_post_act_fake_quantizer = self.add_1_post_act_fake_quantizer(add_1);  add_1 = None
    cat_1 = torch.cat((add_1_post_act_fake_quantizer, backbone_dark2_1_conv2_act_post_act_fake_quantizer), dim = 1);  add_1_post_act_fake_quantizer = backbone_dark2_1_conv2_act_post_act_fake_quantizer = None
    cat_1_post_act_fake_quantizer = self.cat_1_post_act_fake_quantizer(cat_1);  cat_1 = None
    backbone_dark2_1_conv3_conv = getattr(self.backbone.dark2, "1").conv3.conv(cat_1_post_act_fake_quantizer);  cat_1_post_act_fake_quantizer = None
    backbone_dark2_1_conv3_conv_post_act_fake_quantizer = self.backbone_dark2_1_conv3_conv_post_act_fake_quantizer(backbone_dark2_1_conv3_conv);  backbone_dark2_1_conv3_conv = None
    backbone_dark2_1_conv3_act = getattr(self.backbone.dark2, "1").conv3.act(backbone_dark2_1_conv3_conv_post_act_fake_quantizer);  backbone_dark2_1_conv3_conv_post_act_fake_quantizer = None
    backbone_dark2_1_conv3_act_post_act_fake_quantizer = self.backbone_dark2_1_conv3_act_post_act_fake_quantizer(backbone_dark2_1_conv3_act);  backbone_dark2_1_conv3_act = None
    backbone_dark3_0_conv = getattr(self.backbone.dark3, "0").conv(backbone_dark2_1_conv3_act_post_act_fake_quantizer);  backbone_dark2_1_conv3_act_post_act_fake_quantizer = None
    backbone_dark3_0_conv_post_act_fake_quantizer = self.backbone_dark3_0_conv_post_act_fake_quantizer(backbone_dark3_0_conv);  backbone_dark3_0_conv = None
    backbone_dark3_0_act = getattr(self.backbone.dark3, "0").act(backbone_dark3_0_conv_post_act_fake_quantizer);  backbone_dark3_0_conv_post_act_fake_quantizer = None
    backbone_dark3_0_act_post_act_fake_quantizer = self.backbone_dark3_0_act_post_act_fake_quantizer(backbone_dark3_0_act);  backbone_dark3_0_act = None
    backbone_dark3_1_conv1_conv = getattr(self.backbone.dark3, "1").conv1.conv(backbone_dark3_0_act_post_act_fake_quantizer)
    backbone_dark3_1_conv1_conv_post_act_fake_quantizer = self.backbone_dark3_1_conv1_conv_post_act_fake_quantizer(backbone_dark3_1_conv1_conv);  backbone_dark3_1_conv1_conv = None
    backbone_dark3_1_conv1_act = getattr(self.backbone.dark3, "1").conv1.act(backbone_dark3_1_conv1_conv_post_act_fake_quantizer);  backbone_dark3_1_conv1_conv_post_act_fake_quantizer = None
    backbone_dark3_1_conv1_act_post_act_fake_quantizer = self.backbone_dark3_1_conv1_act_post_act_fake_quantizer(backbone_dark3_1_conv1_act);  backbone_dark3_1_conv1_act = None
    backbone_dark3_1_conv2_conv = getattr(self.backbone.dark3, "1").conv2.conv(backbone_dark3_0_act_post_act_fake_quantizer);  backbone_dark3_0_act_post_act_fake_quantizer = None
    backbone_dark3_1_conv2_conv_post_act_fake_quantizer = self.backbone_dark3_1_conv2_conv_post_act_fake_quantizer(backbone_dark3_1_conv2_conv);  backbone_dark3_1_conv2_conv = None
    backbone_dark3_1_conv2_act = getattr(self.backbone.dark3, "1").conv2.act(backbone_dark3_1_conv2_conv_post_act_fake_quantizer);  backbone_dark3_1_conv2_conv_post_act_fake_quantizer = None
    backbone_dark3_1_conv2_act_post_act_fake_quantizer = self.backbone_dark3_1_conv2_act_post_act_fake_quantizer(backbone_dark3_1_conv2_act);  backbone_dark3_1_conv2_act = None
    backbone_dark3_1_m_0_conv1_conv = getattr(getattr(self.backbone.dark3, "1").m, "0").conv1.conv(backbone_dark3_1_conv1_act_post_act_fake_quantizer)
    backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer = self.backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer(backbone_dark3_1_m_0_conv1_conv);  backbone_dark3_1_m_0_conv1_conv = None
    backbone_dark3_1_m_0_conv1_act = getattr(getattr(self.backbone.dark3, "1").m, "0").conv1.act(backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer);  backbone_dark3_1_m_0_conv1_conv_post_act_fake_quantizer = None
    backbone_dark3_1_m_0_conv1_act_post_act_fake_quantizer = self.backbone_dark3_1_m_0_conv1_act_post_act_fake_quantizer(backbone_dark3_1_m_0_conv1_act);  backbone_dark3_1_m_0_conv1_act = None
    backbone_dark3_1_m_0_conv2_conv = getattr(getattr(self.backbone.dark3, "1").m, "0").conv2.conv(backbone_dark3_1_m_0_conv1_act_post_act_fake_quantizer);  backbone_dark3_1_m_0_conv1_act_post_act_fake_quantizer = None
    backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer = self.backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer(backbone_dark3_1_m_0_conv2_conv);  backbone_dark3_1_m_0_conv2_conv = None
    backbone_dark3_1_m_0_conv2_act = getattr(getattr(self.backbone.dark3, "1").m, "0").conv2.act(backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer);  backbone_dark3_1_m_0_conv2_conv_post_act_fake_quantizer = None
    backbone_dark3_1_m_0_conv2_act_post_act_fake_quantizer = self.backbone_dark3_1_m_0_conv2_act_post_act_fake_quantizer(backbone_dark3_1_m_0_conv2_act);  backbone_dark3_1_m_0_conv2_act = None
    add_2 = backbone_dark3_1_m_0_conv2_act_post_act_fake_quantizer + backbone_dark3_1_conv1_act_post_act_fake_quantizer;  backbone_dark3_1_m_0_conv2_act_post_act_fake_quantizer = backbone_dark3_1_conv1_act_post_act_fake_quantizer = None
    add_2_post_act_fake_quantizer = self.add_2_post_act_fake_quantizer(add_2);  add_2 = None
    backbone_dark3_1_m_1_conv1_conv = getattr(getattr(self.backbone.dark3, "1").m, "1").conv1.conv(add_2_post_act_fake_quantizer)
    backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer = self.backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer(backbone_dark3_1_m_1_conv1_conv);  backbone_dark3_1_m_1_conv1_conv = None
    backbone_dark3_1_m_1_conv1_act = getattr(getattr(self.backbone.dark3, "1").m, "1").conv1.act(backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer);  backbone_dark3_1_m_1_conv1_conv_post_act_fake_quantizer = None
    backbone_dark3_1_m_1_conv1_act_post_act_fake_quantizer = self.backbone_dark3_1_m_1_conv1_act_post_act_fake_quantizer(backbone_dark3_1_m_1_conv1_act);  backbone_dark3_1_m_1_conv1_act = None
    backbone_dark3_1_m_1_conv2_conv = getattr(getattr(self.backbone.dark3, "1").m, "1").conv2.conv(backbone_dark3_1_m_1_conv1_act_post_act_fake_quantizer);  backbone_dark3_1_m_1_conv1_act_post_act_fake_quantizer = None
    backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer = self.backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer(backbone_dark3_1_m_1_conv2_conv);  backbone_dark3_1_m_1_conv2_conv = None
    backbone_dark3_1_m_1_conv2_act = getattr(getattr(self.backbone.dark3, "1").m, "1").conv2.act(backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer);  backbone_dark3_1_m_1_conv2_conv_post_act_fake_quantizer = None
    backbone_dark3_1_m_1_conv2_act_post_act_fake_quantizer = self.backbone_dark3_1_m_1_conv2_act_post_act_fake_quantizer(backbone_dark3_1_m_1_conv2_act);  backbone_dark3_1_m_1_conv2_act = None
    add_3 = backbone_dark3_1_m_1_conv2_act_post_act_fake_quantizer + add_2_post_act_fake_quantizer;  backbone_dark3_1_m_1_conv2_act_post_act_fake_quantizer = add_2_post_act_fake_quantizer = None
    add_3_post_act_fake_quantizer = self.add_3_post_act_fake_quantizer(add_3);  add_3 = None
    backbone_dark3_1_m_2_conv1_conv = getattr(getattr(self.backbone.dark3, "1").m, "2").conv1.conv(add_3_post_act_fake_quantizer)
    backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer = self.backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer(backbone_dark3_1_m_2_conv1_conv);  backbone_dark3_1_m_2_conv1_conv = None
    backbone_dark3_1_m_2_conv1_act = getattr(getattr(self.backbone.dark3, "1").m, "2").conv1.act(backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer);  backbone_dark3_1_m_2_conv1_conv_post_act_fake_quantizer = None
    backbone_dark3_1_m_2_conv1_act_post_act_fake_quantizer = self.backbone_dark3_1_m_2_conv1_act_post_act_fake_quantizer(backbone_dark3_1_m_2_conv1_act);  backbone_dark3_1_m_2_conv1_act = None
    backbone_dark3_1_m_2_conv2_conv = getattr(getattr(self.backbone.dark3, "1").m, "2").conv2.conv(backbone_dark3_1_m_2_conv1_act_post_act_fake_quantizer);  backbone_dark3_1_m_2_conv1_act_post_act_fake_quantizer = None
    backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer = self.backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer(backbone_dark3_1_m_2_conv2_conv);  backbone_dark3_1_m_2_conv2_conv = None
    backbone_dark3_1_m_2_conv2_act = getattr(getattr(self.backbone.dark3, "1").m, "2").conv2.act(backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer);  backbone_dark3_1_m_2_conv2_conv_post_act_fake_quantizer = None
    backbone_dark3_1_m_2_conv2_act_post_act_fake_quantizer = self.backbone_dark3_1_m_2_conv2_act_post_act_fake_quantizer(backbone_dark3_1_m_2_conv2_act);  backbone_dark3_1_m_2_conv2_act = None
    add_4 = backbone_dark3_1_m_2_conv2_act_post_act_fake_quantizer + add_3_post_act_fake_quantizer;  backbone_dark3_1_m_2_conv2_act_post_act_fake_quantizer = add_3_post_act_fake_quantizer = None
    add_4_post_act_fake_quantizer = self.add_4_post_act_fake_quantizer(add_4);  add_4 = None
    cat_2 = torch.cat((add_4_post_act_fake_quantizer, backbone_dark3_1_conv2_act_post_act_fake_quantizer), dim = 1);  add_4_post_act_fake_quantizer = backbone_dark3_1_conv2_act_post_act_fake_quantizer = None
    cat_2_post_act_fake_quantizer = self.cat_2_post_act_fake_quantizer(cat_2);  cat_2 = None
    backbone_dark3_1_conv3_conv = getattr(self.backbone.dark3, "1").conv3.conv(cat_2_post_act_fake_quantizer);  cat_2_post_act_fake_quantizer = None
    backbone_dark3_1_conv3_conv_post_act_fake_quantizer = self.backbone_dark3_1_conv3_conv_post_act_fake_quantizer(backbone_dark3_1_conv3_conv);  backbone_dark3_1_conv3_conv = None
    backbone_dark3_1_conv3_act = getattr(self.backbone.dark3, "1").conv3.act(backbone_dark3_1_conv3_conv_post_act_fake_quantizer);  backbone_dark3_1_conv3_conv_post_act_fake_quantizer = None
    backbone_dark3_1_conv3_act_post_act_fake_quantizer = self.backbone_dark3_1_conv3_act_post_act_fake_quantizer(backbone_dark3_1_conv3_act);  backbone_dark3_1_conv3_act = None
    backbone_dark4_0_conv = getattr(self.backbone.dark4, "0").conv(backbone_dark3_1_conv3_act_post_act_fake_quantizer)
    backbone_dark4_0_conv_post_act_fake_quantizer = self.backbone_dark4_0_conv_post_act_fake_quantizer(backbone_dark4_0_conv);  backbone_dark4_0_conv = None
    backbone_dark4_0_act = getattr(self.backbone.dark4, "0").act(backbone_dark4_0_conv_post_act_fake_quantizer);  backbone_dark4_0_conv_post_act_fake_quantizer = None
    backbone_dark4_0_act_post_act_fake_quantizer = self.backbone_dark4_0_act_post_act_fake_quantizer(backbone_dark4_0_act);  backbone_dark4_0_act = None
    backbone_dark4_1_conv1_conv = getattr(self.backbone.dark4, "1").conv1.conv(backbone_dark4_0_act_post_act_fake_quantizer)
    backbone_dark4_1_conv1_conv_post_act_fake_quantizer = self.backbone_dark4_1_conv1_conv_post_act_fake_quantizer(backbone_dark4_1_conv1_conv);  backbone_dark4_1_conv1_conv = None
    backbone_dark4_1_conv1_act = getattr(self.backbone.dark4, "1").conv1.act(backbone_dark4_1_conv1_conv_post_act_fake_quantizer);  backbone_dark4_1_conv1_conv_post_act_fake_quantizer = None
    backbone_dark4_1_conv1_act_post_act_fake_quantizer = self.backbone_dark4_1_conv1_act_post_act_fake_quantizer(backbone_dark4_1_conv1_act);  backbone_dark4_1_conv1_act = None
    backbone_dark4_1_conv2_conv = getattr(self.backbone.dark4, "1").conv2.conv(backbone_dark4_0_act_post_act_fake_quantizer);  backbone_dark4_0_act_post_act_fake_quantizer = None
    backbone_dark4_1_conv2_conv_post_act_fake_quantizer = self.backbone_dark4_1_conv2_conv_post_act_fake_quantizer(backbone_dark4_1_conv2_conv);  backbone_dark4_1_conv2_conv = None
    backbone_dark4_1_conv2_act = getattr(self.backbone.dark4, "1").conv2.act(backbone_dark4_1_conv2_conv_post_act_fake_quantizer);  backbone_dark4_1_conv2_conv_post_act_fake_quantizer = None
    backbone_dark4_1_conv2_act_post_act_fake_quantizer = self.backbone_dark4_1_conv2_act_post_act_fake_quantizer(backbone_dark4_1_conv2_act);  backbone_dark4_1_conv2_act = None
    backbone_dark4_1_m_0_conv1_conv = getattr(getattr(self.backbone.dark4, "1").m, "0").conv1.conv(backbone_dark4_1_conv1_act_post_act_fake_quantizer)
    backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer = self.backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer(backbone_dark4_1_m_0_conv1_conv);  backbone_dark4_1_m_0_conv1_conv = None
    backbone_dark4_1_m_0_conv1_act = getattr(getattr(self.backbone.dark4, "1").m, "0").conv1.act(backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer);  backbone_dark4_1_m_0_conv1_conv_post_act_fake_quantizer = None
    backbone_dark4_1_m_0_conv1_act_post_act_fake_quantizer = self.backbone_dark4_1_m_0_conv1_act_post_act_fake_quantizer(backbone_dark4_1_m_0_conv1_act);  backbone_dark4_1_m_0_conv1_act = None
    backbone_dark4_1_m_0_conv2_conv = getattr(getattr(self.backbone.dark4, "1").m, "0").conv2.conv(backbone_dark4_1_m_0_conv1_act_post_act_fake_quantizer);  backbone_dark4_1_m_0_conv1_act_post_act_fake_quantizer = None
    backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer = self.backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer(backbone_dark4_1_m_0_conv2_conv);  backbone_dark4_1_m_0_conv2_conv = None
    backbone_dark4_1_m_0_conv2_act = getattr(getattr(self.backbone.dark4, "1").m, "0").conv2.act(backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer);  backbone_dark4_1_m_0_conv2_conv_post_act_fake_quantizer = None
    backbone_dark4_1_m_0_conv2_act_post_act_fake_quantizer = self.backbone_dark4_1_m_0_conv2_act_post_act_fake_quantizer(backbone_dark4_1_m_0_conv2_act);  backbone_dark4_1_m_0_conv2_act = None
    add_5 = backbone_dark4_1_m_0_conv2_act_post_act_fake_quantizer + backbone_dark4_1_conv1_act_post_act_fake_quantizer;  backbone_dark4_1_m_0_conv2_act_post_act_fake_quantizer = backbone_dark4_1_conv1_act_post_act_fake_quantizer = None
    add_5_post_act_fake_quantizer = self.add_5_post_act_fake_quantizer(add_5);  add_5 = None
    backbone_dark4_1_m_1_conv1_conv = getattr(getattr(self.backbone.dark4, "1").m, "1").conv1.conv(add_5_post_act_fake_quantizer)
    backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer = self.backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer(backbone_dark4_1_m_1_conv1_conv);  backbone_dark4_1_m_1_conv1_conv = None
    backbone_dark4_1_m_1_conv1_act = getattr(getattr(self.backbone.dark4, "1").m, "1").conv1.act(backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer);  backbone_dark4_1_m_1_conv1_conv_post_act_fake_quantizer = None
    backbone_dark4_1_m_1_conv1_act_post_act_fake_quantizer = self.backbone_dark4_1_m_1_conv1_act_post_act_fake_quantizer(backbone_dark4_1_m_1_conv1_act);  backbone_dark4_1_m_1_conv1_act = None
    backbone_dark4_1_m_1_conv2_conv = getattr(getattr(self.backbone.dark4, "1").m, "1").conv2.conv(backbone_dark4_1_m_1_conv1_act_post_act_fake_quantizer);  backbone_dark4_1_m_1_conv1_act_post_act_fake_quantizer = None
    backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer = self.backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer(backbone_dark4_1_m_1_conv2_conv);  backbone_dark4_1_m_1_conv2_conv = None
    backbone_dark4_1_m_1_conv2_act = getattr(getattr(self.backbone.dark4, "1").m, "1").conv2.act(backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer);  backbone_dark4_1_m_1_conv2_conv_post_act_fake_quantizer = None
    backbone_dark4_1_m_1_conv2_act_post_act_fake_quantizer = self.backbone_dark4_1_m_1_conv2_act_post_act_fake_quantizer(backbone_dark4_1_m_1_conv2_act);  backbone_dark4_1_m_1_conv2_act = None
    add_6 = backbone_dark4_1_m_1_conv2_act_post_act_fake_quantizer + add_5_post_act_fake_quantizer;  backbone_dark4_1_m_1_conv2_act_post_act_fake_quantizer = add_5_post_act_fake_quantizer = None
    add_6_post_act_fake_quantizer = self.add_6_post_act_fake_quantizer(add_6);  add_6 = None
    backbone_dark4_1_m_2_conv1_conv = getattr(getattr(self.backbone.dark4, "1").m, "2").conv1.conv(add_6_post_act_fake_quantizer)
    backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer = self.backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer(backbone_dark4_1_m_2_conv1_conv);  backbone_dark4_1_m_2_conv1_conv = None
    backbone_dark4_1_m_2_conv1_act = getattr(getattr(self.backbone.dark4, "1").m, "2").conv1.act(backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer);  backbone_dark4_1_m_2_conv1_conv_post_act_fake_quantizer = None
    backbone_dark4_1_m_2_conv1_act_post_act_fake_quantizer = self.backbone_dark4_1_m_2_conv1_act_post_act_fake_quantizer(backbone_dark4_1_m_2_conv1_act);  backbone_dark4_1_m_2_conv1_act = None
    backbone_dark4_1_m_2_conv2_conv = getattr(getattr(self.backbone.dark4, "1").m, "2").conv2.conv(backbone_dark4_1_m_2_conv1_act_post_act_fake_quantizer);  backbone_dark4_1_m_2_conv1_act_post_act_fake_quantizer = None
    backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer = self.backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer(backbone_dark4_1_m_2_conv2_conv);  backbone_dark4_1_m_2_conv2_conv = None
    backbone_dark4_1_m_2_conv2_act = getattr(getattr(self.backbone.dark4, "1").m, "2").conv2.act(backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer);  backbone_dark4_1_m_2_conv2_conv_post_act_fake_quantizer = None
    backbone_dark4_1_m_2_conv2_act_post_act_fake_quantizer = self.backbone_dark4_1_m_2_conv2_act_post_act_fake_quantizer(backbone_dark4_1_m_2_conv2_act);  backbone_dark4_1_m_2_conv2_act = None
    add_7 = backbone_dark4_1_m_2_conv2_act_post_act_fake_quantizer + add_6_post_act_fake_quantizer;  backbone_dark4_1_m_2_conv2_act_post_act_fake_quantizer = add_6_post_act_fake_quantizer = None
    add_7_post_act_fake_quantizer = self.add_7_post_act_fake_quantizer(add_7);  add_7 = None
    cat_3 = torch.cat((add_7_post_act_fake_quantizer, backbone_dark4_1_conv2_act_post_act_fake_quantizer), dim = 1);  add_7_post_act_fake_quantizer = backbone_dark4_1_conv2_act_post_act_fake_quantizer = None
    cat_3_post_act_fake_quantizer = self.cat_3_post_act_fake_quantizer(cat_3);  cat_3 = None
    backbone_dark4_1_conv3_conv = getattr(self.backbone.dark4, "1").conv3.conv(cat_3_post_act_fake_quantizer);  cat_3_post_act_fake_quantizer = None
    backbone_dark4_1_conv3_conv_post_act_fake_quantizer = self.backbone_dark4_1_conv3_conv_post_act_fake_quantizer(backbone_dark4_1_conv3_conv);  backbone_dark4_1_conv3_conv = None
    backbone_dark4_1_conv3_act = getattr(self.backbone.dark4, "1").conv3.act(backbone_dark4_1_conv3_conv_post_act_fake_quantizer);  backbone_dark4_1_conv3_conv_post_act_fake_quantizer = None
    backbone_dark4_1_conv3_act_post_act_fake_quantizer = self.backbone_dark4_1_conv3_act_post_act_fake_quantizer(backbone_dark4_1_conv3_act);  backbone_dark4_1_conv3_act = None
    backbone_dark5_0_conv = getattr(self.backbone.dark5, "0").conv(backbone_dark4_1_conv3_act_post_act_fake_quantizer)
    backbone_dark5_0_conv_post_act_fake_quantizer = self.backbone_dark5_0_conv_post_act_fake_quantizer(backbone_dark5_0_conv);  backbone_dark5_0_conv = None
    backbone_dark5_0_act = getattr(self.backbone.dark5, "0").act(backbone_dark5_0_conv_post_act_fake_quantizer);  backbone_dark5_0_conv_post_act_fake_quantizer = None
    backbone_dark5_0_act_post_act_fake_quantizer = self.backbone_dark5_0_act_post_act_fake_quantizer(backbone_dark5_0_act);  backbone_dark5_0_act = None
    backbone_dark5_1_conv_block1_conv = getattr(self.backbone.dark5, "1").conv_block1.conv(backbone_dark5_0_act_post_act_fake_quantizer);  backbone_dark5_0_act_post_act_fake_quantizer = None
    backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer = self.backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer(backbone_dark5_1_conv_block1_conv);  backbone_dark5_1_conv_block1_conv = None
    backbone_dark5_1_conv_block1_act = getattr(self.backbone.dark5, "1").conv_block1.act(backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer);  backbone_dark5_1_conv_block1_conv_post_act_fake_quantizer = None
    backbone_dark5_1_conv_block1_act_post_act_fake_quantizer = self.backbone_dark5_1_conv_block1_act_post_act_fake_quantizer(backbone_dark5_1_conv_block1_act);  backbone_dark5_1_conv_block1_act = None
    backbone_dark5_1_pooling_blocks_0 = getattr(getattr(self.backbone.dark5, "1").pooling_blocks, "0")(backbone_dark5_1_conv_block1_act_post_act_fake_quantizer)
    backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer = self.backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer(backbone_dark5_1_pooling_blocks_0);  backbone_dark5_1_pooling_blocks_0 = None
    backbone_dark5_1_pooling_blocks_1 = getattr(getattr(self.backbone.dark5, "1").pooling_blocks, "1")(backbone_dark5_1_conv_block1_act_post_act_fake_quantizer)
    backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer = self.backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer(backbone_dark5_1_pooling_blocks_1);  backbone_dark5_1_pooling_blocks_1 = None
    backbone_dark5_1_pooling_blocks_2 = getattr(getattr(self.backbone.dark5, "1").pooling_blocks, "2")(backbone_dark5_1_conv_block1_act_post_act_fake_quantizer)
    backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer = self.backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer(backbone_dark5_1_pooling_blocks_2);  backbone_dark5_1_pooling_blocks_2 = None
    cat_4 = torch.cat([backbone_dark5_1_conv_block1_act_post_act_fake_quantizer, backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer, backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer, backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer], 1);  backbone_dark5_1_conv_block1_act_post_act_fake_quantizer = backbone_dark5_1_pooling_blocks_0_post_act_fake_quantizer = backbone_dark5_1_pooling_blocks_1_post_act_fake_quantizer = backbone_dark5_1_pooling_blocks_2_post_act_fake_quantizer = None
    cat_4_post_act_fake_quantizer = self.cat_4_post_act_fake_quantizer(cat_4);  cat_4 = None
    backbone_dark5_1_conv_block2_conv = getattr(self.backbone.dark5, "1").conv_block2.conv(cat_4_post_act_fake_quantizer);  cat_4_post_act_fake_quantizer = None
    backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer = self.backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer(backbone_dark5_1_conv_block2_conv);  backbone_dark5_1_conv_block2_conv = None
    backbone_dark5_1_conv_block2_act = getattr(self.backbone.dark5, "1").conv_block2.act(backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer);  backbone_dark5_1_conv_block2_conv_post_act_fake_quantizer = None
    backbone_dark5_1_conv_block2_act_post_act_fake_quantizer = self.backbone_dark5_1_conv_block2_act_post_act_fake_quantizer(backbone_dark5_1_conv_block2_act);  backbone_dark5_1_conv_block2_act = None
    backbone_dark5_2_conv1_conv = getattr(self.backbone.dark5, "2").conv1.conv(backbone_dark5_1_conv_block2_act_post_act_fake_quantizer)
    backbone_dark5_2_conv1_conv_post_act_fake_quantizer = self.backbone_dark5_2_conv1_conv_post_act_fake_quantizer(backbone_dark5_2_conv1_conv);  backbone_dark5_2_conv1_conv = None
    backbone_dark5_2_conv1_act = getattr(self.backbone.dark5, "2").conv1.act(backbone_dark5_2_conv1_conv_post_act_fake_quantizer);  backbone_dark5_2_conv1_conv_post_act_fake_quantizer = None
    backbone_dark5_2_conv1_act_post_act_fake_quantizer = self.backbone_dark5_2_conv1_act_post_act_fake_quantizer(backbone_dark5_2_conv1_act);  backbone_dark5_2_conv1_act = None
    backbone_dark5_2_conv2_conv = getattr(self.backbone.dark5, "2").conv2.conv(backbone_dark5_1_conv_block2_act_post_act_fake_quantizer);  backbone_dark5_1_conv_block2_act_post_act_fake_quantizer = None
    backbone_dark5_2_conv2_conv_post_act_fake_quantizer = self.backbone_dark5_2_conv2_conv_post_act_fake_quantizer(backbone_dark5_2_conv2_conv);  backbone_dark5_2_conv2_conv = None
    backbone_dark5_2_conv2_act = getattr(self.backbone.dark5, "2").conv2.act(backbone_dark5_2_conv2_conv_post_act_fake_quantizer);  backbone_dark5_2_conv2_conv_post_act_fake_quantizer = None
    backbone_dark5_2_conv2_act_post_act_fake_quantizer = self.backbone_dark5_2_conv2_act_post_act_fake_quantizer(backbone_dark5_2_conv2_act);  backbone_dark5_2_conv2_act = None
    backbone_dark5_2_m_0_conv1_conv = getattr(getattr(self.backbone.dark5, "2").m, "0").conv1.conv(backbone_dark5_2_conv1_act_post_act_fake_quantizer);  backbone_dark5_2_conv1_act_post_act_fake_quantizer = None
    backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer = self.backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer(backbone_dark5_2_m_0_conv1_conv);  backbone_dark5_2_m_0_conv1_conv = None
    backbone_dark5_2_m_0_conv1_act = getattr(getattr(self.backbone.dark5, "2").m, "0").conv1.act(backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer);  backbone_dark5_2_m_0_conv1_conv_post_act_fake_quantizer = None
    backbone_dark5_2_m_0_conv1_act_post_act_fake_quantizer = self.backbone_dark5_2_m_0_conv1_act_post_act_fake_quantizer(backbone_dark5_2_m_0_conv1_act);  backbone_dark5_2_m_0_conv1_act = None
    backbone_dark5_2_m_0_conv2_conv = getattr(getattr(self.backbone.dark5, "2").m, "0").conv2.conv(backbone_dark5_2_m_0_conv1_act_post_act_fake_quantizer);  backbone_dark5_2_m_0_conv1_act_post_act_fake_quantizer = None
    backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer = self.backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer(backbone_dark5_2_m_0_conv2_conv);  backbone_dark5_2_m_0_conv2_conv = None
    backbone_dark5_2_m_0_conv2_act = getattr(getattr(self.backbone.dark5, "2").m, "0").conv2.act(backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer);  backbone_dark5_2_m_0_conv2_conv_post_act_fake_quantizer = None
    backbone_dark5_2_m_0_conv2_act_post_act_fake_quantizer = self.backbone_dark5_2_m_0_conv2_act_post_act_fake_quantizer(backbone_dark5_2_m_0_conv2_act);  backbone_dark5_2_m_0_conv2_act = None
    cat_5 = torch.cat((backbone_dark5_2_m_0_conv2_act_post_act_fake_quantizer, backbone_dark5_2_conv2_act_post_act_fake_quantizer), dim = 1);  backbone_dark5_2_m_0_conv2_act_post_act_fake_quantizer = backbone_dark5_2_conv2_act_post_act_fake_quantizer = None
    cat_5_post_act_fake_quantizer = self.cat_5_post_act_fake_quantizer(cat_5);  cat_5 = None
    backbone_dark5_2_conv3_conv = getattr(self.backbone.dark5, "2").conv3.conv(cat_5_post_act_fake_quantizer);  cat_5_post_act_fake_quantizer = None
    backbone_dark5_2_conv3_conv_post_act_fake_quantizer = self.backbone_dark5_2_conv3_conv_post_act_fake_quantizer(backbone_dark5_2_conv3_conv);  backbone_dark5_2_conv3_conv = None
    backbone_dark5_2_conv3_act = getattr(self.backbone.dark5, "2").conv3.act(backbone_dark5_2_conv3_conv_post_act_fake_quantizer);  backbone_dark5_2_conv3_conv_post_act_fake_quantizer = None
    _tensor_constant0 = self._tensor_constant0
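    # note: the graph goes on to call dict methods (.update here, ['features']
    # below) on the quantizer's output, further evidence that the whole input
    # dict, not an image Tensor, was routed through input_1_post_act_fake_quantizer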
    update = input_1_post_act_fake_quantizer.update({'features': (backbone_dark3_1_conv3_act_post_act_fake_quantizer, backbone_dark4_1_conv3_act_post_act_fake_quantizer, backbone_dark5_2_conv3_act), 'strides': _tensor_constant0});  backbone_dark3_1_conv3_act_post_act_fake_quantizer = backbone_dark4_1_conv3_act_post_act_fake_quantizer = backbone_dark5_2_conv3_act = _tensor_constant0 = None
    getitem_1 = input_1_post_act_fake_quantizer['features']
    getitem_2 = getitem_1[0]
    getitem_3 = getitem_1[1]
    getitem_4 = getitem_1[2];  getitem_1 = None
    getitem_4_post_act_fake_quantizer = self.getitem_4_post_act_fake_quantizer(getitem_4);  getitem_4 = None
    neck_lateral_conv0_conv = self.neck.lateral_conv0.conv(getitem_4_post_act_fake_quantizer);  getitem_4_post_act_fake_quantizer = None
    neck_lateral_conv0_conv_post_act_fake_quantizer = self.neck_lateral_conv0_conv_post_act_fake_quantizer(neck_lateral_conv0_conv);  neck_lateral_conv0_conv = None
    neck_lateral_conv0_act = self.neck.lateral_conv0.act(neck_lateral_conv0_conv_post_act_fake_quantizer);  neck_lateral_conv0_conv_post_act_fake_quantizer = None
    neck_lateral_conv0_act_post_act_fake_quantizer = self.neck_lateral_conv0_act_post_act_fake_quantizer(neck_lateral_conv0_act);  neck_lateral_conv0_act = None
    getattr_1 = getitem_3.shape
    getitem_5 = getattr_1[slice(-2, None, None)];  getattr_1 = None
    interpolate_1 = torch.nn.functional.interpolate(neck_lateral_conv0_act_post_act_fake_quantizer, size = getitem_5, scale_factor = None, mode = 'nearest', align_corners = None, recompute_scale_factor = None);  getitem_5 = None
    interpolate_1_post_act_fake_quantizer = self.interpolate_1_post_act_fake_quantizer(interpolate_1);  interpolate_1 = None
    cat_6 = torch.cat([interpolate_1_post_act_fake_quantizer, getitem_3], 1);  interpolate_1_post_act_fake_quantizer = getitem_3 = None
    cat_6_post_act_fake_quantizer = self.cat_6_post_act_fake_quantizer(cat_6);  cat_6 = None
    neck_c3_p4_conv1_conv = self.neck.C3_p4.conv1.conv(cat_6_post_act_fake_quantizer)
    neck_c3_p4_conv1_conv_post_act_fake_quantizer = self.neck_c3_p4_conv1_conv_post_act_fake_quantizer(neck_c3_p4_conv1_conv);  neck_c3_p4_conv1_conv = None
    neck_c3_p4_conv1_act = self.neck.C3_p4.conv1.act(neck_c3_p4_conv1_conv_post_act_fake_quantizer);  neck_c3_p4_conv1_conv_post_act_fake_quantizer = None
    neck_c3_p4_conv1_act_post_act_fake_quantizer = self.neck_c3_p4_conv1_act_post_act_fake_quantizer(neck_c3_p4_conv1_act);  neck_c3_p4_conv1_act = None
    neck_c3_p4_conv2_conv = self.neck.C3_p4.conv2.conv(cat_6_post_act_fake_quantizer);  cat_6_post_act_fake_quantizer = None
    neck_c3_p4_conv2_conv_post_act_fake_quantizer = self.neck_c3_p4_conv2_conv_post_act_fake_quantizer(neck_c3_p4_conv2_conv);  neck_c3_p4_conv2_conv = None
    neck_c3_p4_conv2_act = self.neck.C3_p4.conv2.act(neck_c3_p4_conv2_conv_post_act_fake_quantizer);  neck_c3_p4_conv2_conv_post_act_fake_quantizer = None
    neck_c3_p4_conv2_act_post_act_fake_quantizer = self.neck_c3_p4_conv2_act_post_act_fake_quantizer(neck_c3_p4_conv2_act);  neck_c3_p4_conv2_act = None
    neck_c3_p4_m_0_conv1_conv = getattr(self.neck.C3_p4.m, "0").conv1.conv(neck_c3_p4_conv1_act_post_act_fake_quantizer)
    neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer = self.neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer(neck_c3_p4_m_0_conv1_conv);  neck_c3_p4_m_0_conv1_conv = None
    neck_c3_p4_m_0_conv1_act = getattr(self.neck.C3_p4.m, "0").conv1.act(neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer);  neck_c3_p4_m_0_conv1_conv_post_act_fake_quantizer = None
    neck_c3_p4_m_0_conv1_act_post_act_fake_quantizer = self.neck_c3_p4_m_0_conv1_act_post_act_fake_quantizer(neck_c3_p4_m_0_conv1_act);  neck_c3_p4_m_0_conv1_act = None
    neck_c3_p4_m_0_conv2_conv = getattr(self.neck.C3_p4.m, "0").conv2.conv(neck_c3_p4_m_0_conv1_act_post_act_fake_quantizer);  neck_c3_p4_m_0_conv1_act_post_act_fake_quantizer = None
    neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer = self.neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer(neck_c3_p4_m_0_conv2_conv);  neck_c3_p4_m_0_conv2_conv = None
    neck_c3_p4_m_0_conv2_act = getattr(self.neck.C3_p4.m, "0").conv2.act(neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer);  neck_c3_p4_m_0_conv2_conv_post_act_fake_quantizer = None
    neck_c3_p4_m_0_conv2_act_post_act_fake_quantizer = self.neck_c3_p4_m_0_conv2_act_post_act_fake_quantizer(neck_c3_p4_m_0_conv2_act);  neck_c3_p4_m_0_conv2_act = None
    add_8 = neck_c3_p4_m_0_conv2_act_post_act_fake_quantizer + neck_c3_p4_conv1_act_post_act_fake_quantizer;  neck_c3_p4_m_0_conv2_act_post_act_fake_quantizer = neck_c3_p4_conv1_act_post_act_fake_quantizer = None
    add_8_post_act_fake_quantizer = self.add_8_post_act_fake_quantizer(add_8);  add_8 = None
    cat_7 = torch.cat((add_8_post_act_fake_quantizer, neck_c3_p4_conv2_act_post_act_fake_quantizer), dim = 1);  add_8_post_act_fake_quantizer = neck_c3_p4_conv2_act_post_act_fake_quantizer = None
    cat_7_post_act_fake_quantizer = self.cat_7_post_act_fake_quantizer(cat_7);  cat_7 = None
    neck_c3_p4_conv3_conv = self.neck.C3_p4.conv3.conv(cat_7_post_act_fake_quantizer);  cat_7_post_act_fake_quantizer = None
    neck_c3_p4_conv3_conv_post_act_fake_quantizer = self.neck_c3_p4_conv3_conv_post_act_fake_quantizer(neck_c3_p4_conv3_conv);  neck_c3_p4_conv3_conv = None
    neck_c3_p4_conv3_act = self.neck.C3_p4.conv3.act(neck_c3_p4_conv3_conv_post_act_fake_quantizer);  neck_c3_p4_conv3_conv_post_act_fake_quantizer = None
    neck_c3_p4_conv3_act_post_act_fake_quantizer = self.neck_c3_p4_conv3_act_post_act_fake_quantizer(neck_c3_p4_conv3_act);  neck_c3_p4_conv3_act = None
    neck_reduce_conv1_conv = self.neck.reduce_conv1.conv(neck_c3_p4_conv3_act_post_act_fake_quantizer);  neck_c3_p4_conv3_act_post_act_fake_quantizer = None
    neck_reduce_conv1_conv_post_act_fake_quantizer = self.neck_reduce_conv1_conv_post_act_fake_quantizer(neck_reduce_conv1_conv);  neck_reduce_conv1_conv = None
    neck_reduce_conv1_act = self.neck.reduce_conv1.act(neck_reduce_conv1_conv_post_act_fake_quantizer);  neck_reduce_conv1_conv_post_act_fake_quantizer = None
    neck_reduce_conv1_act_post_act_fake_quantizer = self.neck_reduce_conv1_act_post_act_fake_quantizer(neck_reduce_conv1_act);  neck_reduce_conv1_act = None
    getattr_2 = getitem_2.shape
    getitem_6 = getattr_2[slice(-2, None, None)];  getattr_2 = None
    interpolate_2 = torch.nn.functional.interpolate(neck_reduce_conv1_act_post_act_fake_quantizer, size = getitem_6, scale_factor = None, mode = 'nearest', align_corners = None, recompute_scale_factor = None);  getitem_6 = None
    interpolate_2_post_act_fake_quantizer = self.interpolate_2_post_act_fake_quantizer(interpolate_2);  interpolate_2 = None
    cat_8 = torch.cat([interpolate_2_post_act_fake_quantizer, getitem_2], 1);  interpolate_2_post_act_fake_quantizer = getitem_2 = None
    cat_8_post_act_fake_quantizer = self.cat_8_post_act_fake_quantizer(cat_8);  cat_8 = None
    neck_c3_p3_conv1_conv = self.neck.C3_p3.conv1.conv(cat_8_post_act_fake_quantizer)
    neck_c3_p3_conv1_conv_post_act_fake_quantizer = self.neck_c3_p3_conv1_conv_post_act_fake_quantizer(neck_c3_p3_conv1_conv);  neck_c3_p3_conv1_conv = None
    neck_c3_p3_conv1_act = self.neck.C3_p3.conv1.act(neck_c3_p3_conv1_conv_post_act_fake_quantizer);  neck_c3_p3_conv1_conv_post_act_fake_quantizer = None
    neck_c3_p3_conv1_act_post_act_fake_quantizer = self.neck_c3_p3_conv1_act_post_act_fake_quantizer(neck_c3_p3_conv1_act);  neck_c3_p3_conv1_act = None
    neck_c3_p3_conv2_conv = self.neck.C3_p3.conv2.conv(cat_8_post_act_fake_quantizer);  cat_8_post_act_fake_quantizer = None
    neck_c3_p3_conv2_conv_post_act_fake_quantizer = self.neck_c3_p3_conv2_conv_post_act_fake_quantizer(neck_c3_p3_conv2_conv);  neck_c3_p3_conv2_conv = None
    neck_c3_p3_conv2_act = self.neck.C3_p3.conv2.act(neck_c3_p3_conv2_conv_post_act_fake_quantizer);  neck_c3_p3_conv2_conv_post_act_fake_quantizer = None
    neck_c3_p3_conv2_act_post_act_fake_quantizer = self.neck_c3_p3_conv2_act_post_act_fake_quantizer(neck_c3_p3_conv2_act);  neck_c3_p3_conv2_act = None
    neck_c3_p3_m_0_conv1_conv = getattr(self.neck.C3_p3.m, "0").conv1.conv(neck_c3_p3_conv1_act_post_act_fake_quantizer)
    neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer = self.neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer(neck_c3_p3_m_0_conv1_conv);  neck_c3_p3_m_0_conv1_conv = None
    neck_c3_p3_m_0_conv1_act = getattr(self.neck.C3_p3.m, "0").conv1.act(neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer);  neck_c3_p3_m_0_conv1_conv_post_act_fake_quantizer = None
    neck_c3_p3_m_0_conv1_act_post_act_fake_quantizer = self.neck_c3_p3_m_0_conv1_act_post_act_fake_quantizer(neck_c3_p3_m_0_conv1_act);  neck_c3_p3_m_0_conv1_act = None
    neck_c3_p3_m_0_conv2_conv = getattr(self.neck.C3_p3.m, "0").conv2.conv(neck_c3_p3_m_0_conv1_act_post_act_fake_quantizer);  neck_c3_p3_m_0_conv1_act_post_act_fake_quantizer = None
    neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer = self.neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer(neck_c3_p3_m_0_conv2_conv);  neck_c3_p3_m_0_conv2_conv = None
    neck_c3_p3_m_0_conv2_act = getattr(self.neck.C3_p3.m, "0").conv2.act(neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer);  neck_c3_p3_m_0_conv2_conv_post_act_fake_quantizer = None
    neck_c3_p3_m_0_conv2_act_post_act_fake_quantizer = self.neck_c3_p3_m_0_conv2_act_post_act_fake_quantizer(neck_c3_p3_m_0_conv2_act);  neck_c3_p3_m_0_conv2_act = None
    add_9 = neck_c3_p3_m_0_conv2_act_post_act_fake_quantizer + neck_c3_p3_conv1_act_post_act_fake_quantizer;  neck_c3_p3_m_0_conv2_act_post_act_fake_quantizer = neck_c3_p3_conv1_act_post_act_fake_quantizer = None
    add_9_post_act_fake_quantizer = self.add_9_post_act_fake_quantizer(add_9);  add_9 = None
    cat_9 = torch.cat((add_9_post_act_fake_quantizer, neck_c3_p3_conv2_act_post_act_fake_quantizer), dim = 1);  add_9_post_act_fake_quantizer = neck_c3_p3_conv2_act_post_act_fake_quantizer = None
    cat_9_post_act_fake_quantizer = self.cat_9_post_act_fake_quantizer(cat_9);  cat_9 = None
    neck_c3_p3_conv3_conv = self.neck.C3_p3.conv3.conv(cat_9_post_act_fake_quantizer);  cat_9_post_act_fake_quantizer = None
    neck_c3_p3_conv3_conv_post_act_fake_quantizer = self.neck_c3_p3_conv3_conv_post_act_fake_quantizer(neck_c3_p3_conv3_conv);  neck_c3_p3_conv3_conv = None
    neck_c3_p3_conv3_act = self.neck.C3_p3.conv3.act(neck_c3_p3_conv3_conv_post_act_fake_quantizer);  neck_c3_p3_conv3_conv_post_act_fake_quantizer = None
    neck_c3_p3_conv3_act_post_act_fake_quantizer = self.neck_c3_p3_conv3_act_post_act_fake_quantizer(neck_c3_p3_conv3_act);  neck_c3_p3_conv3_act = None
    neck_bu_conv2_conv = self.neck.bu_conv2.conv(neck_c3_p3_conv3_act_post_act_fake_quantizer)
    neck_bu_conv2_conv_post_act_fake_quantizer = self.neck_bu_conv2_conv_post_act_fake_quantizer(neck_bu_conv2_conv);  neck_bu_conv2_conv = None
    neck_bu_conv2_act = self.neck.bu_conv2.act(neck_bu_conv2_conv_post_act_fake_quantizer);  neck_bu_conv2_conv_post_act_fake_quantizer = None
    cat_10 = torch.cat([neck_bu_conv2_act, neck_reduce_conv1_act_post_act_fake_quantizer], 1);  neck_bu_conv2_act = neck_reduce_conv1_act_post_act_fake_quantizer = None
    cat_10_post_act_fake_quantizer = self.cat_10_post_act_fake_quantizer(cat_10);  cat_10 = None
    neck_c3_n3_conv1_conv = self.neck.C3_n3.conv1.conv(cat_10_post_act_fake_quantizer)
    neck_c3_n3_conv1_conv_post_act_fake_quantizer = self.neck_c3_n3_conv1_conv_post_act_fake_quantizer(neck_c3_n3_conv1_conv);  neck_c3_n3_conv1_conv = None
    neck_c3_n3_conv1_act = self.neck.C3_n3.conv1.act(neck_c3_n3_conv1_conv_post_act_fake_quantizer);  neck_c3_n3_conv1_conv_post_act_fake_quantizer = None
    neck_c3_n3_conv1_act_post_act_fake_quantizer = self.neck_c3_n3_conv1_act_post_act_fake_quantizer(neck_c3_n3_conv1_act);  neck_c3_n3_conv1_act = None
    neck_c3_n3_conv2_conv = self.neck.C3_n3.conv2.conv(cat_10_post_act_fake_quantizer);  cat_10_post_act_fake_quantizer = None
    neck_c3_n3_conv2_conv_post_act_fake_quantizer = self.neck_c3_n3_conv2_conv_post_act_fake_quantizer(neck_c3_n3_conv2_conv);  neck_c3_n3_conv2_conv = None
    neck_c3_n3_conv2_act = self.neck.C3_n3.conv2.act(neck_c3_n3_conv2_conv_post_act_fake_quantizer);  neck_c3_n3_conv2_conv_post_act_fake_quantizer = None
    neck_c3_n3_conv2_act_post_act_fake_quantizer = self.neck_c3_n3_conv2_act_post_act_fake_quantizer(neck_c3_n3_conv2_act);  neck_c3_n3_conv2_act = None
    neck_c3_n3_m_0_conv1_conv = getattr(self.neck.C3_n3.m, "0").conv1.conv(neck_c3_n3_conv1_act_post_act_fake_quantizer)
    neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer = self.neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer(neck_c3_n3_m_0_conv1_conv);  neck_c3_n3_m_0_conv1_conv = None
    neck_c3_n3_m_0_conv1_act = getattr(self.neck.C3_n3.m, "0").conv1.act(neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer);  neck_c3_n3_m_0_conv1_conv_post_act_fake_quantizer = None
    neck_c3_n3_m_0_conv1_act_post_act_fake_quantizer = self.neck_c3_n3_m_0_conv1_act_post_act_fake_quantizer(neck_c3_n3_m_0_conv1_act);  neck_c3_n3_m_0_conv1_act = None
    neck_c3_n3_m_0_conv2_conv = getattr(self.neck.C3_n3.m, "0").conv2.conv(neck_c3_n3_m_0_conv1_act_post_act_fake_quantizer);  neck_c3_n3_m_0_conv1_act_post_act_fake_quantizer = None
    neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer = self.neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer(neck_c3_n3_m_0_conv2_conv);  neck_c3_n3_m_0_conv2_conv = None
    neck_c3_n3_m_0_conv2_act = getattr(self.neck.C3_n3.m, "0").conv2.act(neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer);  neck_c3_n3_m_0_conv2_conv_post_act_fake_quantizer = None
    neck_c3_n3_m_0_conv2_act_post_act_fake_quantizer = self.neck_c3_n3_m_0_conv2_act_post_act_fake_quantizer(neck_c3_n3_m_0_conv2_act);  neck_c3_n3_m_0_conv2_act = None
    add_10 = neck_c3_n3_m_0_conv2_act_post_act_fake_quantizer + neck_c3_n3_conv1_act_post_act_fake_quantizer;  neck_c3_n3_m_0_conv2_act_post_act_fake_quantizer = neck_c3_n3_conv1_act_post_act_fake_quantizer = None
    add_10_post_act_fake_quantizer = self.add_10_post_act_fake_quantizer(add_10);  add_10 = None
    cat_11 = torch.cat((add_10_post_act_fake_quantizer, neck_c3_n3_conv2_act_post_act_fake_quantizer), dim = 1);  add_10_post_act_fake_quantizer = neck_c3_n3_conv2_act_post_act_fake_quantizer = None
    cat_11_post_act_fake_quantizer = self.cat_11_post_act_fake_quantizer(cat_11);  cat_11 = None
    neck_c3_n3_conv3_conv = self.neck.C3_n3.conv3.conv(cat_11_post_act_fake_quantizer);  cat_11_post_act_fake_quantizer = None
    neck_c3_n3_conv3_conv_post_act_fake_quantizer = self.neck_c3_n3_conv3_conv_post_act_fake_quantizer(neck_c3_n3_conv3_conv);  neck_c3_n3_conv3_conv = None
    neck_c3_n3_conv3_act = self.neck.C3_n3.conv3.act(neck_c3_n3_conv3_conv_post_act_fake_quantizer);  neck_c3_n3_conv3_conv_post_act_fake_quantizer = None
    neck_c3_n3_conv3_act_post_act_fake_quantizer = self.neck_c3_n3_conv3_act_post_act_fake_quantizer(neck_c3_n3_conv3_act);  neck_c3_n3_conv3_act = None
    neck_bu_conv1_conv = self.neck.bu_conv1.conv(neck_c3_n3_conv3_act_post_act_fake_quantizer)
    neck_bu_conv1_conv_post_act_fake_quantizer = self.neck_bu_conv1_conv_post_act_fake_quantizer(neck_bu_conv1_conv);  neck_bu_conv1_conv = None
    neck_bu_conv1_act = self.neck.bu_conv1.act(neck_bu_conv1_conv_post_act_fake_quantizer);  neck_bu_conv1_conv_post_act_fake_quantizer = None
    cat_12 = torch.cat([neck_bu_conv1_act, neck_lateral_conv0_act_post_act_fake_quantizer], 1);  neck_bu_conv1_act = neck_lateral_conv0_act_post_act_fake_quantizer = None
    cat_12_post_act_fake_quantizer = self.cat_12_post_act_fake_quantizer(cat_12);  cat_12 = None
    neck_c3_n4_conv1_conv = self.neck.C3_n4.conv1.conv(cat_12_post_act_fake_quantizer)
    neck_c3_n4_conv1_conv_post_act_fake_quantizer = self.neck_c3_n4_conv1_conv_post_act_fake_quantizer(neck_c3_n4_conv1_conv);  neck_c3_n4_conv1_conv = None
    neck_c3_n4_conv1_act = self.neck.C3_n4.conv1.act(neck_c3_n4_conv1_conv_post_act_fake_quantizer);  neck_c3_n4_conv1_conv_post_act_fake_quantizer = None
    neck_c3_n4_conv1_act_post_act_fake_quantizer = self.neck_c3_n4_conv1_act_post_act_fake_quantizer(neck_c3_n4_conv1_act);  neck_c3_n4_conv1_act = None
    neck_c3_n4_conv2_conv = self.neck.C3_n4.conv2.conv(cat_12_post_act_fake_quantizer);  cat_12_post_act_fake_quantizer = None
    neck_c3_n4_conv2_conv_post_act_fake_quantizer = self.neck_c3_n4_conv2_conv_post_act_fake_quantizer(neck_c3_n4_conv2_conv);  neck_c3_n4_conv2_conv = None
    neck_c3_n4_conv2_act = self.neck.C3_n4.conv2.act(neck_c3_n4_conv2_conv_post_act_fake_quantizer);  neck_c3_n4_conv2_conv_post_act_fake_quantizer = None
    neck_c3_n4_conv2_act_post_act_fake_quantizer = self.neck_c3_n4_conv2_act_post_act_fake_quantizer(neck_c3_n4_conv2_act);  neck_c3_n4_conv2_act = None
    neck_c3_n4_m_0_conv1_conv = getattr(self.neck.C3_n4.m, "0").conv1.conv(neck_c3_n4_conv1_act_post_act_fake_quantizer)
    neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer = self.neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer(neck_c3_n4_m_0_conv1_conv);  neck_c3_n4_m_0_conv1_conv = None
    neck_c3_n4_m_0_conv1_act = getattr(self.neck.C3_n4.m, "0").conv1.act(neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer);  neck_c3_n4_m_0_conv1_conv_post_act_fake_quantizer = None
    neck_c3_n4_m_0_conv1_act_post_act_fake_quantizer = self.neck_c3_n4_m_0_conv1_act_post_act_fake_quantizer(neck_c3_n4_m_0_conv1_act);  neck_c3_n4_m_0_conv1_act = None
    neck_c3_n4_m_0_conv2_conv = getattr(self.neck.C3_n4.m, "0").conv2.conv(neck_c3_n4_m_0_conv1_act_post_act_fake_quantizer);  neck_c3_n4_m_0_conv1_act_post_act_fake_quantizer = None
    neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer = self.neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer(neck_c3_n4_m_0_conv2_conv);  neck_c3_n4_m_0_conv2_conv = None
    neck_c3_n4_m_0_conv2_act = getattr(self.neck.C3_n4.m, "0").conv2.act(neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer);  neck_c3_n4_m_0_conv2_conv_post_act_fake_quantizer = None
    neck_c3_n4_m_0_conv2_act_post_act_fake_quantizer = self.neck_c3_n4_m_0_conv2_act_post_act_fake_quantizer(neck_c3_n4_m_0_conv2_act);  neck_c3_n4_m_0_conv2_act = None
    add_11 = neck_c3_n4_m_0_conv2_act_post_act_fake_quantizer + neck_c3_n4_conv1_act_post_act_fake_quantizer;  neck_c3_n4_m_0_conv2_act_post_act_fake_quantizer = neck_c3_n4_conv1_act_post_act_fake_quantizer = None
    add_11_post_act_fake_quantizer = self.add_11_post_act_fake_quantizer(add_11);  add_11 = None
    cat_13 = torch.cat((add_11_post_act_fake_quantizer, neck_c3_n4_conv2_act_post_act_fake_quantizer), dim = 1);  add_11_post_act_fake_quantizer = neck_c3_n4_conv2_act_post_act_fake_quantizer = None
    cat_13_post_act_fake_quantizer = self.cat_13_post_act_fake_quantizer(cat_13);  cat_13 = None
    neck_c3_n4_conv3_conv = self.neck.C3_n4.conv3.conv(cat_13_post_act_fake_quantizer);  cat_13_post_act_fake_quantizer = None
    neck_c3_n4_conv3_conv_post_act_fake_quantizer = self.neck_c3_n4_conv3_conv_post_act_fake_quantizer(neck_c3_n4_conv3_conv);  neck_c3_n4_conv3_conv = None
    neck_c3_n4_conv3_act = self.neck.C3_n4.conv3.act(neck_c3_n4_conv3_conv_post_act_fake_quantizer);  neck_c3_n4_conv3_conv_post_act_fake_quantizer = None
    _tensor_constant1 = self._tensor_constant1
    update_1 = input_1_post_act_fake_quantizer.update({'features': (neck_c3_p3_conv3_act_post_act_fake_quantizer, neck_c3_n3_conv3_act_post_act_fake_quantizer, neck_c3_n4_conv3_act), 'strides': _tensor_constant1});  neck_c3_p3_conv3_act_post_act_fake_quantizer = neck_c3_n3_conv3_act_post_act_fake_quantizer = neck_c3_n4_conv3_act = _tensor_constant1 = None
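    # note that `input_1_post_act_fake_quantizer` is a fake quantizer applied to the
    # *input dict* itself (FX even traced dict.update()/__getitem__ on its output);
    # calling it on a dict is what raises the AttributeError in the traceback below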
    getitem_7 = input_1_post_act_fake_quantizer['features']
    getitem_8 = getitem_7[0]
    getitem_8_post_act_fake_quantizer = self.getitem_8_post_act_fake_quantizer(getitem_8);  getitem_8 = None
    roi_head_stems_0_conv = getattr(self.roi_head.stems, "0").conv(getitem_8_post_act_fake_quantizer);  getitem_8_post_act_fake_quantizer = None
    roi_head_stems_0_conv_post_act_fake_quantizer = self.roi_head_stems_0_conv_post_act_fake_quantizer(roi_head_stems_0_conv);  roi_head_stems_0_conv = None
    roi_head_stems_0_act = getattr(self.roi_head.stems, "0").act(roi_head_stems_0_conv_post_act_fake_quantizer);  roi_head_stems_0_conv_post_act_fake_quantizer = None
    roi_head_stems_0_act_post_act_fake_quantizer = self.roi_head_stems_0_act_post_act_fake_quantizer(roi_head_stems_0_act);  roi_head_stems_0_act = None
    roi_head_cls_convs_0_0_conv = getattr(getattr(self.roi_head.cls_convs, "0"), "0").conv(roi_head_stems_0_act_post_act_fake_quantizer)
    roi_head_cls_convs_0_0_conv_post_act_fake_quantizer = self.roi_head_cls_convs_0_0_conv_post_act_fake_quantizer(roi_head_cls_convs_0_0_conv);  roi_head_cls_convs_0_0_conv = None
    roi_head_cls_convs_0_0_act = getattr(getattr(self.roi_head.cls_convs, "0"), "0").act(roi_head_cls_convs_0_0_conv_post_act_fake_quantizer);  roi_head_cls_convs_0_0_conv_post_act_fake_quantizer = None
    roi_head_cls_convs_0_0_act_post_act_fake_quantizer = self.roi_head_cls_convs_0_0_act_post_act_fake_quantizer(roi_head_cls_convs_0_0_act);  roi_head_cls_convs_0_0_act = None
    roi_head_cls_convs_0_1_conv = getattr(getattr(self.roi_head.cls_convs, "0"), "1").conv(roi_head_cls_convs_0_0_act_post_act_fake_quantizer);  roi_head_cls_convs_0_0_act_post_act_fake_quantizer = None
    roi_head_cls_convs_0_1_conv_post_act_fake_quantizer = self.roi_head_cls_convs_0_1_conv_post_act_fake_quantizer(roi_head_cls_convs_0_1_conv);  roi_head_cls_convs_0_1_conv = None
    roi_head_cls_convs_0_1_act = getattr(getattr(self.roi_head.cls_convs, "0"), "1").act(roi_head_cls_convs_0_1_conv_post_act_fake_quantizer);  roi_head_cls_convs_0_1_conv_post_act_fake_quantizer = None
    roi_head_cls_convs_0_1_act_post_act_fake_quantizer = self.roi_head_cls_convs_0_1_act_post_act_fake_quantizer(roi_head_cls_convs_0_1_act);  roi_head_cls_convs_0_1_act = None
    roi_head_reg_convs_0_0_conv = getattr(getattr(self.roi_head.reg_convs, "0"), "0").conv(roi_head_stems_0_act_post_act_fake_quantizer);  roi_head_stems_0_act_post_act_fake_quantizer = None
    roi_head_reg_convs_0_0_conv_post_act_fake_quantizer = self.roi_head_reg_convs_0_0_conv_post_act_fake_quantizer(roi_head_reg_convs_0_0_conv);  roi_head_reg_convs_0_0_conv = None
    roi_head_reg_convs_0_0_act = getattr(getattr(self.roi_head.reg_convs, "0"), "0").act(roi_head_reg_convs_0_0_conv_post_act_fake_quantizer);  roi_head_reg_convs_0_0_conv_post_act_fake_quantizer = None
    roi_head_reg_convs_0_0_act_post_act_fake_quantizer = self.roi_head_reg_convs_0_0_act_post_act_fake_quantizer(roi_head_reg_convs_0_0_act);  roi_head_reg_convs_0_0_act = None
    roi_head_reg_convs_0_1_conv = getattr(getattr(self.roi_head.reg_convs, "0"), "1").conv(roi_head_reg_convs_0_0_act_post_act_fake_quantizer);  roi_head_reg_convs_0_0_act_post_act_fake_quantizer = None
    roi_head_reg_convs_0_1_conv_post_act_fake_quantizer = self.roi_head_reg_convs_0_1_conv_post_act_fake_quantizer(roi_head_reg_convs_0_1_conv);  roi_head_reg_convs_0_1_conv = None
    roi_head_reg_convs_0_1_act = getattr(getattr(self.roi_head.reg_convs, "0"), "1").act(roi_head_reg_convs_0_1_conv_post_act_fake_quantizer);  roi_head_reg_convs_0_1_conv_post_act_fake_quantizer = None
    roi_head_reg_convs_0_1_act_post_act_fake_quantizer = self.roi_head_reg_convs_0_1_act_post_act_fake_quantizer(roi_head_reg_convs_0_1_act);  roi_head_reg_convs_0_1_act = None
    roi_head_cls_preds_0 = getattr(self.roi_head.cls_preds, "0")(roi_head_cls_convs_0_1_act_post_act_fake_quantizer);  roi_head_cls_convs_0_1_act_post_act_fake_quantizer = None
    roi_head_cls_preds_0_post_act_fake_quantizer = self.roi_head_cls_preds_0_post_act_fake_quantizer(roi_head_cls_preds_0);  roi_head_cls_preds_0 = None
    roi_head_reg_preds_0 = getattr(self.roi_head.reg_preds, "0")(roi_head_reg_convs_0_1_act_post_act_fake_quantizer)
    roi_head_reg_preds_0_post_act_fake_quantizer = self.roi_head_reg_preds_0_post_act_fake_quantizer(roi_head_reg_preds_0);  roi_head_reg_preds_0 = None
    roi_head_obj_preds_0 = getattr(self.roi_head.obj_preds, "0")(roi_head_reg_convs_0_1_act_post_act_fake_quantizer);  roi_head_reg_convs_0_1_act_post_act_fake_quantizer = None
    roi_head_obj_preds_0_post_act_fake_quantizer = self.roi_head_obj_preds_0_post_act_fake_quantizer(roi_head_obj_preds_0);  roi_head_obj_preds_0 = None
    getitem_9 = getitem_7[1]
    getitem_9_post_act_fake_quantizer = self.getitem_9_post_act_fake_quantizer(getitem_9);  getitem_9 = None
    roi_head_stems_1_conv = getattr(self.roi_head.stems, "1").conv(getitem_9_post_act_fake_quantizer);  getitem_9_post_act_fake_quantizer = None
    roi_head_stems_1_conv_post_act_fake_quantizer = self.roi_head_stems_1_conv_post_act_fake_quantizer(roi_head_stems_1_conv);  roi_head_stems_1_conv = None
    roi_head_stems_1_act = getattr(self.roi_head.stems, "1").act(roi_head_stems_1_conv_post_act_fake_quantizer);  roi_head_stems_1_conv_post_act_fake_quantizer = None
    roi_head_stems_1_act_post_act_fake_quantizer = self.roi_head_stems_1_act_post_act_fake_quantizer(roi_head_stems_1_act);  roi_head_stems_1_act = None
    roi_head_cls_convs_1_0_conv = getattr(getattr(self.roi_head.cls_convs, "1"), "0").conv(roi_head_stems_1_act_post_act_fake_quantizer)
    roi_head_cls_convs_1_0_conv_post_act_fake_quantizer = self.roi_head_cls_convs_1_0_conv_post_act_fake_quantizer(roi_head_cls_convs_1_0_conv);  roi_head_cls_convs_1_0_conv = None
    roi_head_cls_convs_1_0_act = getattr(getattr(self.roi_head.cls_convs, "1"), "0").act(roi_head_cls_convs_1_0_conv_post_act_fake_quantizer);  roi_head_cls_convs_1_0_conv_post_act_fake_quantizer = None
    roi_head_cls_convs_1_0_act_post_act_fake_quantizer = self.roi_head_cls_convs_1_0_act_post_act_fake_quantizer(roi_head_cls_convs_1_0_act);  roi_head_cls_convs_1_0_act = None
    roi_head_cls_convs_1_1_conv = getattr(getattr(self.roi_head.cls_convs, "1"), "1").conv(roi_head_cls_convs_1_0_act_post_act_fake_quantizer);  roi_head_cls_convs_1_0_act_post_act_fake_quantizer = None
    roi_head_cls_convs_1_1_conv_post_act_fake_quantizer = self.roi_head_cls_convs_1_1_conv_post_act_fake_quantizer(roi_head_cls_convs_1_1_conv);  roi_head_cls_convs_1_1_conv = None
    roi_head_cls_convs_1_1_act = getattr(getattr(self.roi_head.cls_convs, "1"), "1").act(roi_head_cls_convs_1_1_conv_post_act_fake_quantizer);  roi_head_cls_convs_1_1_conv_post_act_fake_quantizer = None
    roi_head_cls_convs_1_1_act_post_act_fake_quantizer = self.roi_head_cls_convs_1_1_act_post_act_fake_quantizer(roi_head_cls_convs_1_1_act);  roi_head_cls_convs_1_1_act = None
    roi_head_reg_convs_1_0_conv = getattr(getattr(self.roi_head.reg_convs, "1"), "0").conv(roi_head_stems_1_act_post_act_fake_quantizer);  roi_head_stems_1_act_post_act_fake_quantizer = None
    roi_head_reg_convs_1_0_conv_post_act_fake_quantizer = self.roi_head_reg_convs_1_0_conv_post_act_fake_quantizer(roi_head_reg_convs_1_0_conv);  roi_head_reg_convs_1_0_conv = None
    roi_head_reg_convs_1_0_act = getattr(getattr(self.roi_head.reg_convs, "1"), "0").act(roi_head_reg_convs_1_0_conv_post_act_fake_quantizer);  roi_head_reg_convs_1_0_conv_post_act_fake_quantizer = None
    roi_head_reg_convs_1_0_act_post_act_fake_quantizer = self.roi_head_reg_convs_1_0_act_post_act_fake_quantizer(roi_head_reg_convs_1_0_act);  roi_head_reg_convs_1_0_act = None
    roi_head_reg_convs_1_1_conv = getattr(getattr(self.roi_head.reg_convs, "1"), "1").conv(roi_head_reg_convs_1_0_act_post_act_fake_quantizer);  roi_head_reg_convs_1_0_act_post_act_fake_quantizer = None
    roi_head_reg_convs_1_1_conv_post_act_fake_quantizer = self.roi_head_reg_convs_1_1_conv_post_act_fake_quantizer(roi_head_reg_convs_1_1_conv);  roi_head_reg_convs_1_1_conv = None
    roi_head_reg_convs_1_1_act = getattr(getattr(self.roi_head.reg_convs, "1"), "1").act(roi_head_reg_convs_1_1_conv_post_act_fake_quantizer);  roi_head_reg_convs_1_1_conv_post_act_fake_quantizer = None
    roi_head_reg_convs_1_1_act_post_act_fake_quantizer = self.roi_head_reg_convs_1_1_act_post_act_fake_quantizer(roi_head_reg_convs_1_1_act);  roi_head_reg_convs_1_1_act = None
    roi_head_cls_preds_1 = getattr(self.roi_head.cls_preds, "1")(roi_head_cls_convs_1_1_act_post_act_fake_quantizer);  roi_head_cls_convs_1_1_act_post_act_fake_quantizer = None
    roi_head_cls_preds_1_post_act_fake_quantizer = self.roi_head_cls_preds_1_post_act_fake_quantizer(roi_head_cls_preds_1);  roi_head_cls_preds_1 = None
    roi_head_reg_preds_1 = getattr(self.roi_head.reg_preds, "1")(roi_head_reg_convs_1_1_act_post_act_fake_quantizer)
    roi_head_reg_preds_1_post_act_fake_quantizer = self.roi_head_reg_preds_1_post_act_fake_quantizer(roi_head_reg_preds_1);  roi_head_reg_preds_1 = None
    roi_head_obj_preds_1 = getattr(self.roi_head.obj_preds, "1")(roi_head_reg_convs_1_1_act_post_act_fake_quantizer);  roi_head_reg_convs_1_1_act_post_act_fake_quantizer = None
    roi_head_obj_preds_1_post_act_fake_quantizer = self.roi_head_obj_preds_1_post_act_fake_quantizer(roi_head_obj_preds_1);  roi_head_obj_preds_1 = None
    getitem_10 = getitem_7[2];  getitem_7 = None
    getitem_10_post_act_fake_quantizer = self.getitem_10_post_act_fake_quantizer(getitem_10);  getitem_10 = None
    roi_head_stems_2_conv = getattr(self.roi_head.stems, "2").conv(getitem_10_post_act_fake_quantizer);  getitem_10_post_act_fake_quantizer = None
    roi_head_stems_2_conv_post_act_fake_quantizer = self.roi_head_stems_2_conv_post_act_fake_quantizer(roi_head_stems_2_conv);  roi_head_stems_2_conv = None
    roi_head_stems_2_act = getattr(self.roi_head.stems, "2").act(roi_head_stems_2_conv_post_act_fake_quantizer);  roi_head_stems_2_conv_post_act_fake_quantizer = None
    roi_head_stems_2_act_post_act_fake_quantizer = self.roi_head_stems_2_act_post_act_fake_quantizer(roi_head_stems_2_act);  roi_head_stems_2_act = None
    roi_head_cls_convs_2_0_conv = getattr(getattr(self.roi_head.cls_convs, "2"), "0").conv(roi_head_stems_2_act_post_act_fake_quantizer)
    roi_head_cls_convs_2_0_conv_post_act_fake_quantizer = self.roi_head_cls_convs_2_0_conv_post_act_fake_quantizer(roi_head_cls_convs_2_0_conv);  roi_head_cls_convs_2_0_conv = None
    roi_head_cls_convs_2_0_act = getattr(getattr(self.roi_head.cls_convs, "2"), "0").act(roi_head_cls_convs_2_0_conv_post_act_fake_quantizer);  roi_head_cls_convs_2_0_conv_post_act_fake_quantizer = None
    roi_head_cls_convs_2_0_act_post_act_fake_quantizer = self.roi_head_cls_convs_2_0_act_post_act_fake_quantizer(roi_head_cls_convs_2_0_act);  roi_head_cls_convs_2_0_act = None
    roi_head_cls_convs_2_1_conv = getattr(getattr(self.roi_head.cls_convs, "2"), "1").conv(roi_head_cls_convs_2_0_act_post_act_fake_quantizer);  roi_head_cls_convs_2_0_act_post_act_fake_quantizer = None
    roi_head_cls_convs_2_1_conv_post_act_fake_quantizer = self.roi_head_cls_convs_2_1_conv_post_act_fake_quantizer(roi_head_cls_convs_2_1_conv);  roi_head_cls_convs_2_1_conv = None
    roi_head_cls_convs_2_1_act = getattr(getattr(self.roi_head.cls_convs, "2"), "1").act(roi_head_cls_convs_2_1_conv_post_act_fake_quantizer);  roi_head_cls_convs_2_1_conv_post_act_fake_quantizer = None
    roi_head_cls_convs_2_1_act_post_act_fake_quantizer = self.roi_head_cls_convs_2_1_act_post_act_fake_quantizer(roi_head_cls_convs_2_1_act);  roi_head_cls_convs_2_1_act = None
    roi_head_reg_convs_2_0_conv = getattr(getattr(self.roi_head.reg_convs, "2"), "0").conv(roi_head_stems_2_act_post_act_fake_quantizer);  roi_head_stems_2_act_post_act_fake_quantizer = None
    roi_head_reg_convs_2_0_conv_post_act_fake_quantizer = self.roi_head_reg_convs_2_0_conv_post_act_fake_quantizer(roi_head_reg_convs_2_0_conv);  roi_head_reg_convs_2_0_conv = None
    roi_head_reg_convs_2_0_act = getattr(getattr(self.roi_head.reg_convs, "2"), "0").act(roi_head_reg_convs_2_0_conv_post_act_fake_quantizer);  roi_head_reg_convs_2_0_conv_post_act_fake_quantizer = None
    roi_head_reg_convs_2_0_act_post_act_fake_quantizer = self.roi_head_reg_convs_2_0_act_post_act_fake_quantizer(roi_head_reg_convs_2_0_act);  roi_head_reg_convs_2_0_act = None
    roi_head_reg_convs_2_1_conv = getattr(getattr(self.roi_head.reg_convs, "2"), "1").conv(roi_head_reg_convs_2_0_act_post_act_fake_quantizer);  roi_head_reg_convs_2_0_act_post_act_fake_quantizer = None
    roi_head_reg_convs_2_1_conv_post_act_fake_quantizer = self.roi_head_reg_convs_2_1_conv_post_act_fake_quantizer(roi_head_reg_convs_2_1_conv);  roi_head_reg_convs_2_1_conv = None
    roi_head_reg_convs_2_1_act = getattr(getattr(self.roi_head.reg_convs, "2"), "1").act(roi_head_reg_convs_2_1_conv_post_act_fake_quantizer);  roi_head_reg_convs_2_1_conv_post_act_fake_quantizer = None
    roi_head_reg_convs_2_1_act_post_act_fake_quantizer = self.roi_head_reg_convs_2_1_act_post_act_fake_quantizer(roi_head_reg_convs_2_1_act);  roi_head_reg_convs_2_1_act = None
    roi_head_cls_preds_2 = getattr(self.roi_head.cls_preds, "2")(roi_head_cls_convs_2_1_act_post_act_fake_quantizer);  roi_head_cls_convs_2_1_act_post_act_fake_quantizer = None
    roi_head_cls_preds_2_post_act_fake_quantizer = self.roi_head_cls_preds_2_post_act_fake_quantizer(roi_head_cls_preds_2);  roi_head_cls_preds_2 = None
    roi_head_reg_preds_2 = getattr(self.roi_head.reg_preds, "2")(roi_head_reg_convs_2_1_act_post_act_fake_quantizer)
    roi_head_reg_preds_2_post_act_fake_quantizer = self.roi_head_reg_preds_2_post_act_fake_quantizer(roi_head_reg_preds_2);  roi_head_reg_preds_2 = None
    roi_head_obj_preds_2 = getattr(self.roi_head.obj_preds, "2")(roi_head_reg_convs_2_1_act_post_act_fake_quantizer);  roi_head_reg_convs_2_1_act_post_act_fake_quantizer = None
    roi_head_obj_preds_2_post_act_fake_quantizer = self.roi_head_obj_preds_2_post_act_fake_quantizer(roi_head_obj_preds_2);  roi_head_obj_preds_2 = None
    update_2 = input_1_post_act_fake_quantizer.update({'preds': ((roi_head_cls_preds_0_post_act_fake_quantizer, roi_head_reg_preds_0_post_act_fake_quantizer, roi_head_obj_preds_0_post_act_fake_quantizer), (roi_head_cls_preds_1_post_act_fake_quantizer, roi_head_reg_preds_1_post_act_fake_quantizer, roi_head_obj_preds_1_post_act_fake_quantizer), (roi_head_cls_preds_2_post_act_fake_quantizer, roi_head_reg_preds_2_post_act_fake_quantizer, roi_head_obj_preds_2_post_act_fake_quantizer))});  roi_head_cls_preds_0_post_act_fake_quantizer = roi_head_reg_preds_0_post_act_fake_quantizer = roi_head_obj_preds_0_post_act_fake_quantizer = roi_head_cls_preds_1_post_act_fake_quantizer = roi_head_reg_preds_1_post_act_fake_quantizer = roi_head_obj_preds_1_post_act_fake_quantizer = roi_head_cls_preds_2_post_act_fake_quantizer = roi_head_reg_preds_2_post_act_fake_quantizer = roi_head_obj_preds_2_post_act_fake_quantizer = None
    return input_1_post_act_fake_quantizer
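
The traced forward above makes the failure mode visible: the whole input dict (`input_1`) is passed through `input_1_post_act_fake_quantizer`, and `FixedFakeQuantize.forward` starts with `self.activation_post_process(X.detach())` (see the traceback below), which only tensors support. A minimal sketch of just that failing first statement, not MQBench's actual class:

import torch

class FixedFakeQuantizeSketch(torch.nn.Module):
    """Only the first statement of forward() that the traceback points at."""
    def __init__(self):
        super().__init__()
        self.activation_post_process = torch.quantization.MinMaxObserver()

    def forward(self, X):
        self.activation_post_process(X.detach())  # dicts have no .detach()
        return X

fq = FixedFakeQuantizeSketch()
fq(torch.rand(1, 3, 4, 4))              # fine: tensors support .detach()
fq({'image': torch.rand(1, 3, 4, 4)})   # AttributeError: 'dict' object has no attribute 'detach'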

[MQBENCH] INFO: Enable observer and Disable quantize for act_fake_quant
[MQBENCH] INFO: Enable observer and Disable quantize for act_fake_quant
[MQBENCH] INFO: Enable observer and Disable quantize for act_fake_quant
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/lsc/United-Perception/up/__main__.py", line 27, in <module>
    main()
  File "/data/lsc/United-Perception/up/__main__.py", line 21, in main
    args.run(args)
  File "/data/lsc/United-Perception/up/commands/train.py", line 144, in _main
    launch(main, args.num_gpus_per_machine, args.num_machines, args=args, start_method=args.fork_method)
  File "/data/lsc/United-Perception/up/utils/env/launch.py", line 52, in launch
    mp.start_processes(
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    while not context.join():
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 2 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
    fn(i, *args)
  File "/data/lsc/United-Perception/up/utils/env/launch.py", line 117, in _distributed_worker
    main_func(args)
  File "/data/lsc/United-Perception/up/commands/train.py", line 134, in main
    runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs'])
  File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 17, in __init__
    super(QuantRunner, self).__init__(config, work_dir, training)
  File "/data/lsc/United-Perception/up/runner/base_runner.py", line 59, in __init__
    self.build()
  File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 34, in build
    self.calibrate()
  File "/data/lsc/United-Perception/up/tasks/quant/runner/quant_runner.py", line 182, in calibrate
    self.model(batch)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/lsc/United-Perception/up/tasks/quant/models/model_helper.py", line 76, in forward
    output = submodule(input)
  File "/opt/conda/lib/python3.8/site-packages/torch/fx/graph_module.py", line 308, in wrapped_call
    return cls_call(self, *args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/fx/graph_module.py", line 308, in wrapped_call
    return cls_call(self, *args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_2>", line 4, in forward
    input_1_post_act_fake_quantizer = self.input_1_post_act_fake_quantizer(input_1);  input_1 = None
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/lsc/United-Perception/MQBench/mqbench/fake_quantize/fixed.py", line 20, in forward
    self.activation_post_process(X.detach())
AttributeError: 'dict' object has no attribute 'detach'
RedHandLM commented 2 years ago

I've produced a minimal code snippet:

import torch

from mqbench.prepare_by_platform import BackendType, prepare_by_platform

class model(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = torch.nn.Conv2d(3, 3, 3)
        self.conv2 = torch.nn.Conv2d(3, 3, 3)
        self.conv3 = torch.nn.Conv2d(3, 3, 3)

    def forward(self, x):
        # dict in, dict out -- this is what trips the inserted fake quantizer
        data = x['img']
        x.update({'conv1': self.conv1(data)})
        x.update({'conv2': self.conv2(data)})
        x.update({'conv3': self.conv3(data)})
        return x

test_model = model()
test_model = prepare_by_platform(test_model, BackendType.Tengine_u8)
print(test_model)
test_model({'img': torch.rand(1, 3, 224, 224)})  # raises AttributeError: 'dict' object has no attribute 'detach'
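
This snippet reproduces the crash: `prepare_by_platform` traces the dict-in/dict-out `forward`, so the fake quantizer inserted on the graph input receives the dict itself. Until the fix lands, one workaround (a sketch under the assumption that Tengine_u8 is registered in your checkout; this is not the upstream fix) is to keep the dict packing outside the traced module so only tensors flow through the graph:

import torch
from mqbench.prepare_by_platform import BackendType, prepare_by_platform

class TensorOnlyModel(torch.nn.Module):
    # hypothetical restructuring of the repro model above: tensors in, tensors out
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = torch.nn.Conv2d(3, 3, 3)
        self.conv2 = torch.nn.Conv2d(3, 3, 3)
        self.conv3 = torch.nn.Conv2d(3, 3, 3)

    def forward(self, data):
        return self.conv1(data), self.conv2(data), self.conv3(data)

quant_model = prepare_by_platform(TensorOnlyModel(), BackendType.Tengine_u8)
inputs = {'img': torch.rand(1, 3, 224, 224)}
c1, c2, c3 = quant_model(inputs['img'])  # dict handling stays outside the graph
inputs.update({'conv1': c1, 'conv2': c2, 'conv3': c3})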

And I fixed it in https://github.com/PannenetsF/MQBench/tree/tu8

I ran the minimal snippet, but it tells me the tengine_u8 key cannot be found:

[MQBENCH] INFO: Quantize model Scheme: BackendType.Tengine_u8 Mode: Training
[MQBENCH] INFO: Weight Qconfig:
    FakeQuantize: LearnableFakeQuantize Params: {}
    Oberver:      MinMaxObserver Params: Symmetric: False / Bitwidth: 8 / Per channel: False / Pot scale: False / Extra kwargs: {}
[MQBENCH] INFO: Activation Qconfig:
    FakeQuantize: LearnableFakeQuantize Params: {}
    Oberver:      EMAMinMaxObserver Params: Symmetric: False / Bitwidth: 8 / Per channel: False / Pot scale: False / Extra kwargs: {}
odict_keys([<BackendType.NNIE: 'NNIE'>, <BackendType.Tensorrt: 'Tensorrt'>, <BackendType.Academic: 'Academic'>, <BackendType.OPENVINO: 'OPENVINO'>, <BackendType.Vitis: 'Vitis'>, <BackendType.PPLW8A16: 'PPLW8A16'>, <BackendType.SNPE: 'SNPE'>, <BackendType.PPLCUDA: 'PPLCUDA'>, <BackendType.Tensorrt_NLP: 'Tensorrt_NLP'>, <BackendType.Tengine_u8: 'Tengine_u8'>, <BackendType.ONNX_QNN: 'ONNX_QNN'>, <BackendType.Academic_NLP: 'Academic_NLP'>])
Traceback (most recent call last):
  File "/data/lsc/United-Perception/tengine_u8/convert_tengine_u8.py", line 21, in <module>
    test_model = prepare_by_platform(test_model, BackendType.Tengine_u8)
  File "/data/lsc/United-Perception/MQBench/mqbench/prepare_by_platform.py", line 397, in prepare_by_platform
    quantizer = DEFAULT_MODEL_QUANTIZER[deploy_backend](extra_quantizer_dict, extra_fuse_dict)
KeyError: <BackendType.Tengine_u8: 'Tengine_u8'>
PannenetsF commented 2 years ago

Could you check your PYTHONPATH?
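
For context: the odict_keys printout above does list Tengine_u8, yet the lookup still raises KeyError. If those keys came from `DEFAULT_MODEL_QUANTIZER`, that is the classic symptom of two copies of MQBench being importable, since enum members from two separately imported copies of the same module never compare equal. A quick sanity check (a sketch, assuming `DEFAULT_MODEL_QUANTIZER` is exposed at module level in `mqbench/prepare_by_platform.py`, as the traceback suggests):

import mqbench
from mqbench.prepare_by_platform import BackendType, DEFAULT_MODEL_QUANTIZER

print(mqbench.__file__)  # should point at the tu8 checkout, not an older install
print(BackendType.Tengine_u8 in DEFAULT_MODEL_QUANTIZER)
# False here, even though 'Tengine_u8' appears among the printed keys, means the
# BackendType you imported is not the one the installed mqbench uses as dict keys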

RedHandLM commented 2 years ago

After fixing the path, I reproduced the issue from the title.

RedHandLM commented 2 years ago

最小代码块错误日志.log (error log from running the minimal snippet)

PannenetsF commented 2 years ago

Have you switched to https://github.com/PannenetsF/MQBench/tree/tu8? That branch fixes this.

RedHandLM commented 2 years ago

I've switched over and pointed to the new path, but it still isn't fixed.

RedHandLM commented 2 years ago

Have you switched to https://github.com/PannenetsF/MQBench/tree/tu8? That branch fixes this.

That solved the problem, thank you very much!