-
### Describe the issue
I ran QAT (quantization-aware training) on a CNN model, but when I export it to an ONNX model, inference is slower than with the TorchScript QAT model.
The result is:
torchscript: 4.798517942428589 …
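Numbers like these are usually produced with a warmup-then-loop timing harness; a minimal sketch (the callable, input, and iteration counts below are hypothetical, not from the original report):

```python
import time

def benchmark(fn, x, warmup=5, iters=50):
    """Time `fn(x)` over `iters` calls after a short warmup."""
    for _ in range(warmup):
        fn(x)  # warm up caches and lazy initialization before timing
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return time.perf_counter() - start

# Usage sketch: compare the scripted model against an ONNX Runtime session,
# e.g. benchmark(scripted_model, example_input) vs.
#      benchmark(lambda v: session.run(None, {"input": v}), example_np_input)
elapsed = benchmark(lambda v: v * 2, 3.0)
print(f"elapsed: {elapsed:.6f}s")
```

One cause worth checking when ONNX is slower than TorchScript for a QAT model: the runtime may have failed to fuse some exported Q/DQ node patterns into int8 kernels and fallen back to fp32 execution for those operators.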
-
I have one subclass A, which contains another subclass B inside it. I found that subclass B does not actually get quantized.
```python
class A:
    def __init__(self):
        self.layers = []
        self.layers.append(Dense)
        self.layers.appe…
```
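A likely cause, worth checking: layers held in a plain Python list are not registered as submodules, so anything that walks `model.modules()` (which quantization passes typically do) never sees them; `nn.ModuleList` fixes this. A minimal sketch (layer names and sizes are illustrative, not from the original code):

```python
import torch
import torch.nn as nn

class A(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain list: the child is NOT registered, so passes that walk
        # model.modules() (e.g. quantization) never see it.
        self.hidden = [nn.Linear(4, 4)]
        # ModuleList: children ARE registered and get traversed.
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 2)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

m = A()
names = [type(c).__name__ for c in m.modules()]
# Only the two ModuleList children appear; the plain-list Linear is missing.
print(names.count("Linear"))  # → 2
```

The same invisibility also hides the plain-list layer's parameters from `model.parameters()`, so it would not even be trained correctly.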
t7hua updated 2 years ago
-
### 💡 Your Question
I tuned the quantization weights during training and got an ONNX model with Q/DQ layers as output. However, when I use TensorRT to convert the file to an engine with int8 precision,…
-
Prior to filing: check that this should be a bug instead of a feature request. Everything supported, including the compatible versions of TensorFlow, is listed in the overview page of each technique. …
-
Hi, converting a model that uses `nn.RMSNorm` does not work:
```python
class RMSNormModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = nn.RMSNorm(3, 0.1)…
```
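Until the converter supports `nn.RMSNorm` natively, one common workaround is to express the same computation with primitive ops most converters already handle (multiply, mean, rsqrt); a minimal sketch, assuming the default elementwise-affine weight:

```python
import torch
import torch.nn as nn

class DecomposedRMSNorm(nn.Module):
    """RMSNorm built from mul/mean/rsqrt, primitives most converters support."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # y = x / sqrt(mean(x^2) + eps) * weight
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight

x = torch.randn(2, 3)
out = DecomposedRMSNorm(3)(x)
print(out.shape)  # → torch.Size([2, 3])
```

Dropping the module in as a replacement before export keeps the math identical while sidestepping the unsupported-op error.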
-
mmrazor version: 1.0.0rc2
-
When running `make install` I see a shell script error: `/bin/sh: line 103: [: ==: unary operator expected`.
Not sure what is failing after that, but when I try to build the QAT OpenSSL engine afterwards I'm ge…
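That `[: ==: unary operator expected` message typically means an unquoted variable expanded to nothing inside a `[ ... ]` test, leaving `==` with no left operand; a minimal reproduction and fix (the variable name is hypothetical, not from the actual Makefile):

```shell
#!/bin/sh
# Broken pattern: `if [ $ENGINE == qat ]` fails when $ENGINE is empty or
# unset, because the test degenerates to `[ == qat ]`. Quoting the variable
# (and using the POSIX '=' operator instead of bash's '==') avoids it:
ENGINE=""
if [ "$ENGINE" = "qat" ]; then
    echo "qat build"
else
    echo "default build"
fi
```

With the variable quoted, the empty value is still a valid (empty-string) operand, so the test parses and the script takes the else branch instead of erroring out.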
-
**Ⅰ Issue Description**
1. Tengine with Chinese SM (国密) cipher suites in async mode (`ssl_async on`, plus modifications to the Tengine code to enable it), using the cryptoNI instruction set to accelerate SM2, SM3, and SM4.
2. Send an SM handshake request: `./openssl s_client -connect 127.0.0.1:443 -cipher ECC-SM2-WITH-SM4-SM3 -enable…`
-
### Describe the issue
The model weights are quantized per channel:
- weight_scale.shape = [64]
- zero_point.shape = [64]

When using onnxruntime-train to do QAT, the following error is reporte…
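For reference, per-channel (axis-0) fake quantization of a conv weight does expect 1-D scale and zero-point tensors with one entry per output channel, so shapes of [64] are consistent with a 64-filter layer; a minimal sketch with hypothetical weight dimensions:

```python
import torch

# Hypothetical conv weight: 64 output channels, 3 input channels, 3x3 kernel.
weight = torch.randn(64, 3, 3, 3)
# One scale and one zero-point per output channel (axis 0).
scale = weight.abs().amax(dim=(1, 2, 3)) / 127.0   # shape [64]
zero_point = torch.zeros(64, dtype=torch.int32)    # shape [64]

fq = torch.fake_quantize_per_channel_affine(
    weight, scale, zero_point, axis=0, quant_min=-128, quant_max=127
)
print(fq.shape)  # → torch.Size([64, 3, 3, 3]), same shape as the weight
```

If shapes like these are in place and the error persists, the mismatch is more likely between the axis the trainer assumes and the axis recorded in the model's quantization parameters.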
-
I am trying to quantize a custom YOLOv5 model using the quantization-aware training (QAT) method. I have applied all the model adjustments provided in the Vitis AI User Guide v3.0 and wrote the QAT code base…