-
When I use AIMET AutoQuant to quantize my model, I run into the following issue:
- Prepare Model
Traceback (most recent call last):
File "/workspace/aimet/build/staging/universal/lib/python/aimet_torch/…
-
Hi,
I got a 4% accuracy drop on a QAT-trained CenterNet model wrapped with xnn.QuantTrainModule. I tried the other utility functions you mentioned, such as xnn.utils.freeze_bn(model) and xnn.l…
-
Hi!
Thank you for the paper! It is inspiring that you can compress weights to about 1 bit and the model still works better than random.
A practical sub-2-bit quantization algorithm would be a grea…
-
Hi, I was following along with the post-training quantization guide,
and I am wondering whether the given example code can convert yolov5m as well.
The given YAML for YOLOv5 is yolov5s_ptq.yaml, so is the code spec…
-
Hello, I built Bolt (tag: v1.5.1) as the linux-x86_64_avx512 version and converted an ONNX model to a PTQ version with X2bolt. Then I tried post_training_quantization to quantize it to int8 precision. I follow th…
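For context on what an int8 PTQ step like the one above has to compute: at its core it maps a calibrated float range onto the int8 grid via a scale and zero point. Below is a minimal, generic sketch of that affine mapping in plain Python; the function names are my own illustration, not Bolt's actual API.

```python
def affine_quant_params(xmin, xmax, qmin=-128, qmax=127):
    """Derive scale/zero-point for int8 affine quantization from a
    calibrated float range (generic PTQ math, not Bolt's API)."""
    # extend the range to cover 0 so zero is exactly representable
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = int(round(qmin - xmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # real value x -> clamped int8 code
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    # int8 code -> approximate real value
    return scale * (q - zero_point)
```

The round-trip error of `dequantize(quantize(x, ...))` is bounded by half a scale step, which is why a well-chosen calibration range matters so much for final accuracy.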
-
Hi, converting a model that uses `nn.RMSNorm` does not work:
```python
class RMSNormModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = nn.RMSNorm(3, 0.1)…
```
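For reference, the math `nn.RMSNorm` performs is simple, and decomposing it into primitive ops (square, mean, rsqrt, multiply) is a common workaround when a converter has no native RMSNorm support. A scalar-list sketch of that computation, assuming an elementwise weight and an explicit eps:

```python
import math

def rms_norm(x, weight, eps):
    """Sketch of what nn.RMSNorm computes over the last dimension:
    x / sqrt(mean(x^2) + eps) * weight."""
    mean_sq = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(mean_sq + eps)
    return [v * inv_rms * w for v, w in zip(x, weight)]
```

A converter that chokes on the fused module will usually accept the same computation written out with these primitives.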
-
### Before Asking
- [X] I have read the [README](https://github.com/meituan/YOLOv6/blob/main/README.md) carefully.
- [X] I want to train my custom dataset, and I have read the …
-
When performing ResNet18 PTQ using TensorRT Model Optimizer (TRT-ModelOpt), I encountered the following issue when compiling the model with TensorRT.
First off, I started with a pretrained resnet18 from torchvision. I replaced t…
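A PTQ flow like the one above relies on a calibration pass that records the dynamic range of each activation before ranges are converted to int8 parameters. The simplest range estimator is a running min/max; a minimal, hypothetical sketch (not the ModelOpt or TensorRT calibrator API):

```python
class MinMaxCalibrator:
    """Tracks the running min/max of observed activation values,
    the simplest range estimator used during PTQ calibration."""

    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def observe(self, values):
        # feed flattened activation values from one calibration batch
        for v in values:
            self.lo = min(self.lo, v)
            self.hi = max(self.hi, v)

    def range(self):
        return self.lo, self.hi
```

Production calibrators typically replace raw min/max with histogram- or entropy-based range selection to reduce sensitivity to outliers, but the bookkeeping shape is the same.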
-
![image](https://user-images.githubusercontent.com/102579571/202637067-32524541-f681-4f35-b76a-6ccb6cfae947.png)
-
Thank you for your excellent work on this project. It's the first work I've come across that implements PTQ on top of the diffusers library.
However, quantization is still time-consuming. Could you kindly provide train…