-
I tested multiple models with deepstream-yolo, and the INT8 accuracy is very poor. I also tested QAT on a classification model and it worked perfectly. If I generate TRT weights using …
-
### Search before asking
- [X] I have searched the YOLOv6 [issues](https://github.com/meituan/YOLOv6/issues) and found no similar feature requests.
### Description
Hi YOLOv6 Team,
I am currentl…
-
* Software
  * QAT driver: QAT20.L.1.1.50-00003
  * QAT Engine: v1.6.0
  * openssl: OpenSSL 1.1.1k FIPS 25 Mar 2021
* Hardware
  * Xeon server with 2 sockets INTEL(R) XEON(R) GOLD 6554S
  * …
-
I have trained a camera+LiDAR model based on SwinT and obtained a .pth checkpoint. How can I generate a PTQ model or an FP16 model?
I used the command "python qat/ptq.py --config=configs/nuscenes/det/transfusion/secfpn/came…
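In case it helps while waiting for an answer: qat/ptq.py presumably wraps the standard pytorch-quantization PTQ flow, sketched below under that assumption; `build_model`, `checkpoint.pth`, and `calib_loader` are hypothetical placeholders, not names from this repo.
```python
import torch
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

# Patch torch layers with quantized equivalents BEFORE building the model.
quant_modules.initialize()

model = build_model()  # hypothetical: rebuild the SwinT camera+lidar model
model.load_state_dict(torch.load("checkpoint.pth"))  # hypothetical path
model.eval()

# Phase 1: disable quantization, enable calibrators, and collect statistics.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        if module._calibrator is not None:
            module.disable_quant()
            module.enable_calib()
        else:
            module.disable()

with torch.no_grad():
    for batch in calib_loader:  # hypothetical calibration data loader
        model(batch)

# Phase 2: load the calibrated amax values and re-enable quantization.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        if module._calibrator is not None:
            module.load_calib_amax()
            module.enable_quant()
            module.disable_calib()
        else:
            module.enable()
```
An FP16 model, by contrast, usually needs no calibration at all: export the FP32 model to ONNX and build the TensorRT engine with `trtexec --fp16`.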
-
I got a quantized model using the torchtune package.
The test log shows: INFO:torchtune.utils._logging:Time for inference: 66.56 sec total, 4.51 tokens/sec
4.51 tokens/sec is even lower than that of th…
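One thing worth ruling out: torchao's quantized kernels generally only pay off once the model is compiled, and running a quantized model eagerly can be slower than the unquantized baseline. A minimal sketch of what to try, where `model` and `sample_input` stand in for your generation setup:
```python
import torch

# Quantized weight kernels are generated by the compiler; without torch.compile
# the quantized model falls back to a slow eager dequantize-then-matmul path.
model = torch.compile(model, mode="max-autotune")

# Warm up once so compilation time is not counted as inference time.
with torch.no_grad():
    model(sample_input)
```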
-
Converting this dummy model with quantize_target_type="int8" and per_tensor=True throws an error in TFLite:
```python
import torch.nn as nn
import torch
from tinynn.graph.quantization.quantizer …
```
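For anyone trying to reproduce this, a minimal sketch of the per-tensor INT8 flow, assuming the elided snippet used `QATQuantizer` and `TFLiteConverter`; the dummy model below is a hypothetical stand-in for the one cut off above.
```python
import torch
import torch.nn as nn
from tinynn.graph.quantization.quantizer import QATQuantizer
from tinynn.converter import TFLiteConverter

class DummyModel(nn.Module):  # hypothetical stand-in for the elided model
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = DummyModel()
dummy_input = torch.randn(1, 3, 32, 32)

# Per-tensor quantization is requested through the quantizer config.
quantizer = QATQuantizer(model, dummy_input, work_dir="out",
                         config={"per_tensor": True})
qat_model = quantizer.quantize()

# ... QAT fine-tuning would happen here ...

with torch.no_grad():
    qat_model.eval()
    qat_model = quantizer.convert(qat_model)
    converter = TFLiteConverter(qat_model, dummy_input,
                                tflite_path="out/model.tflite",
                                quantize_target_type="int8")
    converter.convert()
```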
-
When I try to install Fermioniq's emulator (https://docs.fermioniq.com/UserGuide/Setup/installation.html), it returns errors related to package incompatibilities:
```
The current project's suppor…
```
-
@venkatesh6911 @Yogaraj-Alamenda
```
[root@hostname QAT_Engine-1.6.0]# make -j 4
make err-files && make all-am
......
QAT: 332 new reasons
make[1]: Leaving directory '/tmp/QAT_Engine-1.6.0'
make[1…
```
-
From the tutorials and recipes it looks like you can only do dynamic Int8 activations with Int4 weights? Also, I cannot export the trained model to ONNX?
```
import torch
from torchao.quantization.prototype.qat import I…
```
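For reference, the prepare/convert flow documented for the prototype QAT API, assuming the truncated import above was `Int8DynActInt4WeightQATQuantizer`; the model and the training step are hypothetical placeholders:
```python
import torch
from torchao.quantization.prototype.qat import Int8DynActInt4WeightQATQuantizer

model = torch.nn.Sequential(torch.nn.Linear(512, 512))  # hypothetical model

# Insert fake-quantize ops: int8 dynamic per-token activations, int4 grouped weights.
qat_quantizer = Int8DynActInt4WeightQATQuantizer(groupsize=256)
model = qat_quantizer.prepare(model)

# ... fine-tune the model as usual here ...

# Swap the fake-quantize ops for actual quantized weights.
model = qat_quantizer.convert(model)
```
As for ONNX: the converted model relies on torchao's custom packed-weight linear ops, so a plain `torch.onnx.export` is unlikely to handle it, which would explain the failure.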
-
Hi,
I would like to do model inference and PyTorch-to-ONNX conversion of a custom object detection model (not in mmdetection) after QAT. Can you please share sample code for this?
…
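Not an official sample, but a minimal sketch of one common route: run eager-mode QAT with `torch.ao.quantization`, then export the fake-quantized model so the ONNX graph carries QuantizeLinear/DequantizeLinear (QDQ) pairs. The tiny head below is a hypothetical stand-in for a real detector, and how smoothly the export goes depends on the torch version and qconfig.
```python
import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub,
                                   get_default_qat_qconfig, prepare_qat)

class TinyHead(nn.Module):  # hypothetical stand-in for a custom detector
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyHead().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
model = prepare_qat(model)

# ... QAT fine-tuning on your detection data goes here ...

# Export the fake-quantized model; FakeQuantize nodes are lowered to
# QuantizeLinear/DequantizeLinear pairs (per-channel needs opset >= 13).
model.eval()
dummy = torch.randn(1, 3, 320, 320)
torch.onnx.export(model, dummy, "qat_model.onnx", opset_version=13)
```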