-
Hello, I use the default config files in '/application/imagenet_example/PTQ/configs' and your pre-trained models, but I can't reproduce the results reported in the paper for w2a2_resnet18, w2a2_mobilenetV2, etc...
-
### Before Asking
- [X] I have read the [README](https://github.com/meituan/YOLOv6/blob/main/README.md) carefully.
- [X] I want to train my custom dataset, and I have read …
-
Hi everyone,
I was developing a computer vision pipeline on an Axis Q1656-LE Box camera. I installed the Axis ACAP and Axis Computer Vision SDK using Docker, and everything is functional when I use t…
-
Hi there, LOG_SOFTMAX isn't being quantized to INT8: the converter adds a dequantize layer before the LOG_SOFTMAX node. This is not the behavior when using the regular converter from TensorFlow (their…
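For context, this is the standard full-integer post-training quantization setup on the stock TFLite converter that the behavior is being compared against. It is only a sketch: `model` is assumed to be a Keras model, and the representative dataset below just yields random tensors as placeholders.

```python
import tensorflow as tf

def rep_data():
    # Placeholder representative dataset: replace with ~100 real input batches.
    for _ in range(100):
        yield [tf.random.normal([1, 224, 224, 3])]

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` is an assumed Keras model
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data

# TFLITE_BUILTINS_INT8 asks for integer-only kernels and fails the conversion
# if an op cannot be quantized; adding TFLITE_BUILTINS to the list instead
# permits the float fallback (a dequantize inserted before the unsupported op).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```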
-
Hi again,
I'm currently experimenting with quantization and see that the PostQuantizer puts models into training mode before tracing the graph. For some models I've experimented with, this can cause…
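To make the concern concrete, here is a small self-contained sketch (plain PyTorch, not the PostQuantizer API) of why the mode at trace/calibration time matters: in train mode Dropout is active and BatchNorm keeps updating its running statistics, so the forward passes used for calibration no longer match inference behavior.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.drop = nn.Dropout(p=0.5)

    def forward(self, x):
        return self.drop(self.bn(self.conv(x)))

torch.manual_seed(0)
model = TinyNet()
x = torch.randn(4, 3, 32, 32)

model.train()
before = model.bn.running_mean.clone()
y1 = model(x)
y2 = model(x)
print(torch.allclose(y1, y2))                          # False: dropout randomizes each pass
print(torch.allclose(before, model.bn.running_mean))   # False: BN stats drifted during "calibration"

model.eval()
y3 = model(x)
y4 = model(x)
print(torch.allclose(y3, y4))                          # True: deterministic in eval mode
```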
-
## ❓ Question
I am getting a linking error when using `torch_tensorrt::ptq::make_int8_calibrator`. I am using the Windows build based on CMake, so I'm not sure if it's a problem with the way it was…
-
## Description
So I used the [PTQ sample code](https://github.com/NVIDIA/TensorRT/blob/master/tools/pytorch-quantization/examples/calibrate_quant_resnet50.ipynb) to do quantization from fp16 to int8
…
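For reference, the calibration flow that notebook builds on boils down to the sketch below, using NVIDIA's pytorch-quantization package. The ResNet-50 model and `calib_loader` are placeholders rather than the exact notebook code, and default max calibrators are assumed (histogram calibrators need a `method=` argument to `load_calib_amax`).

```python
import torch
import torchvision
from pytorch_quantization import nn as quant_nn
from pytorch_quantization import quant_modules

quant_modules.initialize()  # swap supported torch.nn layers for quantized versions at construction time
model = torchvision.models.resnet50(pretrained=True).cuda().eval()

# 1) put every TensorQuantizer into calibration mode (quantization off, statistics on)
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        if module._calibrator is not None:
            module.disable_quant()
            module.enable_calib()
        else:
            module.disable()

# 2) feed a small set of representative batches (calib_loader is a placeholder DataLoader)
with torch.no_grad():
    for i, (images, _) in enumerate(calib_loader):
        model(images.cuda())
        if i >= 32:
            break

# 3) load the collected amax values and switch fake quantization back on
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        if module._calibrator is not None:
            module.load_calib_amax()  # histogram calibrators: load_calib_amax(method="percentile", percentile=99.99)
            module.enable_quant()
            module.disable_calib()
        else:
            module.enable()
```

After this, the fake-quantized model can be evaluated for accuracy or exported for deployment.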
-
## Description
So I did INT8 calibration on a YOLOv3 ONNX model and was expecting at least a 30% speed improvement. However, the inference time difference is negligible.
![compilation](https://user-image…
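One thing worth double-checking is that the INT8 flag and calibrator were actually set when the engine was built; otherwise layers silently stay in FP32/FP16 and timings barely move. Below is a minimal TensorRT-8-style sketch of that build step, where `yolov3.onnx` and `my_calibrator` (an IInt8EntropyCalibrator2 implementation) are placeholders, not the original script.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov3.onnx", "rb") as f:          # placeholder model path
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)         # request INT8 kernels
config.set_flag(trt.BuilderFlag.FP16)         # allow FP16 fallback where INT8 is unsupported
config.int8_calibrator = my_calibrator        # placeholder IInt8EntropyCalibrator2

engine_bytes = builder.build_serialized_network(network, config)
with open("yolov3_int8.engine", "wb") as f:
    f.write(engine_bytes)
```

Running `trtexec --int8` with verbose logging is also a quick way to cross-check which layers actually end up in INT8.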
-
The code is below; if I set deploy_to_qlinear=True it raises an error, but with deploy_to_qlinear=False there is no error:
from models.experimental import attempt_load
from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.…
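For reference, the generic MQBench PTQ flow around that snippet looks roughly like the sketch below. The weights file, backend, input shape, and calibration data are placeholders, and the deploy_to_qlinear option is deliberately omitted because where it is passed depends on the MQBench version.

```python
import torch
from models.experimental import attempt_load                     # from the YOLOv5 repo
from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.utils.state import enable_calibration, enable_quantization
from mqbench.convert_deploy import convert_deploy

model = attempt_load('yolov5s.pt')                                # placeholder weights
model = prepare_by_platform(model, BackendType.Tensorrt)          # insert fake-quant nodes via FX tracing

enable_calibration(model)                                         # observers collect activation ranges
with torch.no_grad():
    model(torch.randn(1, 3, 640, 640))                            # replace with real calibration batches

enable_quantization(model)                                        # run with fake quantization active
convert_deploy(model, BackendType.Tensorrt,
               input_shape_dict={'images': [1, 3, 640, 640]})     # export the deployable model
```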
-
Update of core-admin to v4.2.16 for Qubes OS r4.2; see the comments below for details and build status.
From commit: https://github.com/QubesOS/qubes-core-admin/commit/fee166ee37188a88c4805b898f4054453fe…