-
### Search before asking
- [X] I have searched the existing issues and found no related answer.
### Please ask your question
ppdet.engine INFO: Epoch: [0] [ 400/14658] lea…
-
## Checklist:
1. Search the [existing related issues](https://github.com/PaddlePaddle/PaddleX/issues) for an answer
2. Read the [FAQ](https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/FAQ/FAQ.md)
3. Confirm whether the bug is still unfixed in the latest version
4…
-
Hey,
How can I quantize the img_stage_lt_d.onnx and bev_stage_lt_d.onnx models to int8?
Thanks!
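For reference, a minimal sketch of one common route, ONNX Runtime post-training dynamic quantization; the tooling choice is an assumption (the repo may recommend a different calibration-based workflow), and only the two file names are taken from the question:

```python
# Sketch: int8 weight quantization with ONNX Runtime (assumed tooling).
from onnxruntime.quantization import quantize_dynamic, QuantType

for model_path in ["img_stage_lt_d.onnx", "bev_stage_lt_d.onnx"]:
    quantize_dynamic(
        model_input=model_path,
        model_output=model_path.replace(".onnx", "_int8.onnx"),
        weight_type=QuantType.QInt8,  # store weights as int8
    )
```

For conv-heavy detection models, static quantization with a calibration dataset usually preserves accuracy better than dynamic quantization; this snippet only shows the simplest entry point.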
-
Hi neuralmagic team!
Very nice work with AutoFP8! We were thinking of integrating AutoFP8 into transformers, so that users can run your checkpoints directly with transformers. We would simply rep…
-
(pytorch1) maziyi@kpyf:~/python/yolox5/micronet/micronet/compression/quantization/wqaq/iao$ python main.py --resume "/home/maziyi/python/yolox5/yolox_best.pth" --q_type 0 --q_level 0 --bn_fuse --qaft …
-
After quantization, is the accuracy degradation on classification roughly consistent with what you would see on other downstream tasks such as object detection and semantic segmentation? Also, what is your view on the fact that most current quantization work only validates its results on classification?
-
**Describe the bug**
This is very painful; models with dynamic shapes simply cannot be converted.
**To Reproduce**
```python
import nncase
import numpy as np
import onnx
import onnxsim
# from nncase_base_func import model_simplify, read_model_fil…
```
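Since the reproduction script is truncated above, here is only a hedged workaround sketch: a common way around dynamic-shape conversion failures is to pin the symbolic input dimensions to fixed values in the ONNX graph before compiling. The file name and shape below are placeholders, not taken from this issue.

```python
import onnx

# Placeholder file name and input shape; substitute the real model and dims.
model = onnx.load("model.onnx")
dims = model.graph.input[0].type.tensor_type.shape.dim
for dim, value in zip(dims, [1, 3, 224, 224]):
    dim.ClearField("dim_param")  # drop the symbolic dimension name, if any
    dim.dim_value = value        # pin the dimension to a static size
onnx.save(model, "model_static.onnx")
```

The static-shape model can then be fed to the nncase compiler (optionally after onnxsim) instead of the original dynamic-shape export.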
-
### 🐛 Describe the bug
fp32 static shape default wrapper
(The results table is truncated here; its columns are: suite, name, thread, batch_size_new, speed_up_new, inductor_new, eager_new, compi…)
-
@saberkun @renjie-liu
I would like to convert MobileBERT to the TFLite format.
I am using the quantized weights you provided in the repo with TensorFlow 1.15. https://github.com/google-research/google-researc…
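For reference, a minimal conversion sketch with the TF 1.15 TFLite converter; the SavedModel directory is a placeholder (the actual MobileBERT export path is not shown in the excerpt), and whether extra flags are needed for the pre-quantized weights is an open question.

```python
import tensorflow as tf  # TensorFlow 1.15, as mentioned in the issue

# Placeholder path: point this at the exported MobileBERT SavedModel.
saved_model_dir = "mobilebert_saved_model"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

with open("mobilebert.tflite", "wb") as f:
    f.write(tflite_model)
```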
-
I am trying to quantize a PyTorch model using NNCF.
The output of my model is a concatenation of two tensors.
To quantize my outputs I set:
`advanced_parameters = AdvancedQuantizationParameters(q…
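For context, a bare-bones sketch of the nncf.quantize call that such advanced_parameters would plug into; the toy model and calibration data below are stand-ins, not taken from the issue, and the exact AdvancedQuantizationParameters field being set is left as in the truncated excerpt.

```python
import nncf
import torch

# Toy stand-ins for the real model and calibration data (not from the issue).
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
calibration_data = [torch.randn(1, 3, 32, 32) for _ in range(10)]

def transform_fn(item):
    # Return exactly what the model's forward() expects for one sample.
    return item

calibration_dataset = nncf.Dataset(calibration_data, transform_fn)

# advanced_parameters=AdvancedQuantizationParameters(...) would be passed here.
quantized_model = nncf.quantize(model, calibration_dataset)
```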