-
### Before Asking
- [X] I have read the [README](https://github.com/meituan/YOLOv6/blob/main/README.md) carefully. I have carefully read the instructions in the README.
- [X] I want to train my custom dataset, and I have read the …
-
Release manager: @liuzhe-lz
Release start date: 10.31
Feature freeze date (at most 30 days): ~~2.10~~ 2.20
Code freeze date & first package: ~~2.17~~ ~~2.24~~ 2.28
Tutorial freeze: ~~2.24~~ 3.3
…
-
## Description
**Background**: I was quantizing yolov7.onnx to a TensorRT INT8 engine, and the mAP dropped by 10%, so I used Polygraphy to find out which layer went wrong during the optimization.
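A quick way to localize the damage, once per-layer outputs from the fp32 and int8 runs have been dumped to files (for example with Polygraphy, or by marking intermediate ONNX outputs), is to score each tensor against its fp32 reference. This is a minimal sketch; the `.npz` file names and the assumption that both dumps are keyed by the same tensor names are hypothetical, not taken from the report above.

```python
# Minimal sketch: compare per-layer activations from an fp32 run and an int8 run.
# The file names are hypothetical; both archives are assumed to be keyed by tensor name.
import numpy as np

fp32 = np.load("outputs_fp32.npz")
int8 = np.load("outputs_int8.npz")

for name in fp32.files:
    if name not in int8.files:
        continue
    a, b = fp32[name].ravel().astype(np.float64), int8[name].ravel().astype(np.float64)
    # Cosine similarity highlights the layers whose activations diverge after quantization.
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    max_abs = float(np.max(np.abs(a - b)))
    print(f"{name}: cosine={cos:.4f} max_abs_diff={max_abs:.4f}")
```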
**This is my…
-
Recently, I quantized a pre-trained ResNet50 model from fp32 to int8, and I noticed that the performance isn't what I expected: the speedup is only about 2x compared to the equivalent fp32 model. …
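For what it's worth, a fair latency comparison needs warm-up iterations and averaged timings. Below is a minimal sketch using onnxruntime; the model file names, input shape, and CPU execution provider are assumptions, not details from the report above.

```python
# Minimal benchmark sketch: average latency over many runs after a warm-up phase.
# File names and input shape are hypothetical; both models are assumed to take float32 NCHW input.
import time
import numpy as np
import onnxruntime as ort

def measure_latency_ms(model_path, runs=100, warmup=20):
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    name = sess.get_inputs()[0].name
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    for _ in range(warmup):                     # warm-up iterations, excluded from timing
        sess.run(None, {name: x})
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {name: x})
    return (time.perf_counter() - start) / runs * 1e3

fp32_ms = measure_latency_ms("resnet50_fp32.onnx")
int8_ms = measure_latency_ms("resnet50_int8.onnx")
print(f"fp32: {fp32_ms:.2f} ms  int8: {int8_ms:.2f} ms  speedup: {fp32_ms / int8_ms:.2f}x")
```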
-
Hi @davidbriand-cea, @cmoineau,
It appears from the documentation that Reshape is compatible, but when I run it: `sudo n2d2 model.ini -seed 1 -w /dev/null -export CPP -nbbits 8 -db-export 1000 -expo…`
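If it helps to narrow things down, here is a minimal sketch that lists the Reshape nodes in an ONNX copy of the network and their target shapes; the file name `model.onnx` is hypothetical and the sketch is independent of N2D2's own export path.

```python
# Minimal sketch: list Reshape nodes and their target shapes in an ONNX graph.
# "model.onnx" is a hypothetical path standing in for the network being exported.
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == "Reshape":
        # The target shape is the second input; it may be a constant initializer or computed at runtime.
        shape = inits.get(node.input[1]) if len(node.input) > 1 else None
        print(node.name or "<unnamed>", "-> target shape:", shape)
```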
-
May I ask why there is no int8 quantization for sequence=64? Is it because the performance is not improved compared to fp16?
Thank you!
-
Right now PTQ/QAT can generate three files, including two ONNX files and one JSON file. How do I convert these into TensorRT's file format (an engine)?
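Typically the quantized ONNX is parsed and built into a serialized TensorRT engine, either with `trtexec` (`trtexec --onnx=model_quant.onnx --int8 --saveEngine=model_quant.engine`) or with the Python API; how the accompanying JSON is consumed depends on the toolkit that produced it, so treat the following as a minimal sketch with hypothetical file names. If the quantized ONNX already contains Q/DQ nodes, the scales are embedded in the graph and no separate calibrator is needed.

```python
# Minimal sketch (TensorRT 8.x Python API): build a serialized INT8 engine from a quantized ONNX file.
# File names are hypothetical.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def build_int8_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)   # Q/DQ nodes in a QAT/PTQ ONNX carry the quantization scales

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)

build_int8_engine("model_quant.onnx", "model_quant.engine")
```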
-
BUG:
Traceback (most recent call last):
  File ".\export.py", line 32, in <module>
    convert.convert()
  File "F:\workspace\python\目标检测\detection\convert_model\convert_base.py", line 112, in convert
…
-
### Feature request
Currently it seems that only dynabert has evaluation, while the subsequent ptq and qat steps do not. Could you consider running a unified evaluation after all strategies have finished, then logging the key information (metrics, export model path, model size (MACs or memory footprint)) and returning it as return arguments?…
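A hypothetical sketch of what such a unified result object could look like is below; the class name and fields are assumptions meant to illustrate the request, not an existing API in the repository.

```python
# Hypothetical sketch of a unified per-strategy result; names and fields are illustrative only.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CompressionResult:
    strategy: str                  # e.g. "dynabert", "ptq", "qat"
    metrics: Dict[str, float]      # evaluation metrics measured after the strategy runs
    export_model_path: str         # where the exported model was written
    model_size_mb: float           # memory footprint of the exported model
    macs: int                      # multiply-accumulate count, if measured

def summarize(results: List[CompressionResult]) -> None:
    # Log the key information for every strategy in one place.
    for r in results:
        print(f"[{r.strategy}] metrics={r.metrics} path={r.export_model_path} "
              f"size={r.model_size_mb:.1f} MB MACs={r.macs}")
```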
-
## Description
I used TensorRT 8.3 to perform PTQ quantization for my model, but encountered an error. The detailed log is as follows; I don't know what this error means.
set kINT8 at layer[0]Conv_0[…
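For context, forcing a layer to INT8 (which is roughly what a log line like `set kINT8 at layer[0] Conv_0` suggests is happening) can be expressed with the TensorRT 8.x Python API as sketched below; the model path is hypothetical, and a calibrator or Q/DQ scales are still needed for a real INT8 build.

```python
# Minimal sketch (TensorRT 8.x Python API): force the first layer to INT8 precision.
# "model.onnx" is a hypothetical path; this is not the reporter's actual script.
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)   # verbose build log shows per-layer precision choices
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)  # honour forced layer precisions
# NOTE: a calibrator (config.int8_calibrator) or Q/DQ scales in the ONNX are still
# required so the INT8 build has valid dynamic ranges.

layer = network.get_layer(0)            # first layer, e.g. Conv_0
layer.precision = trt.int8              # request INT8 execution for this layer
layer.set_output_type(0, trt.int8)      # and an INT8 output tensor

engine = builder.build_serialized_network(network, config)
```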