-
## Description
I only have the FP16 ONNX file of NVIDIA's stanford_resnext50.onnx from the DeepStream SDK.
Now I'm trying to build an INT8 calibration cache for this model to get higher FPS.
th…
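For context, a TensorRT INT8 calibrator is driven by a data feeder that hands it one preprocessed FP32 batch at a time; the calibration cache is written once calibration finishes. Below is a minimal numpy sketch of that feeder only — the class name, batch size, and 3×224×224 input shape are assumptions, and the `trt.IInt8EntropyCalibrator2` subclass that would consume it is left as comments:

```python
import numpy as np

class CalibrationBatcher:
    """Yields contiguous FP32 batches that an INT8 calibrator
    (e.g. a trt.IInt8EntropyCalibrator2 subclass) would copy to the
    GPU from its get_batch() callback."""

    def __init__(self, images, batch_size):
        self.images = images          # (N, C, H, W) float32 array
        self.batch_size = batch_size
        self.index = 0

    def next_batch(self):
        # Return None when calibration data is exhausted, mirroring
        # the convention of returning None from get_batch().
        if self.index + self.batch_size > len(self.images):
            return None
        batch = self.images[self.index:self.index + self.batch_size]
        self.index += self.batch_size
        return np.ascontiguousarray(batch, dtype=np.float32)

# Stand-in data; real calibration needs several hundred representative
# images, preprocessed exactly as at inference time.
data = np.random.rand(8, 3, 224, 224).astype(np.float32)
batcher = CalibrationBatcher(data, batch_size=4)
```

The calibrator subclass would also implement `read_calibration_cache`/`write_calibration_cache` so the resulting cache file can be reused across builds.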
-
Hello,
After quantization-aware training of the model, I run the command:
```
python tools/infer.py -c configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml --slim_config configs/slim/quant/ppyolo_r50vd_qat_pact.yml -o weights=output/ppyolo_r50vd_qat_pact/…
```
-
## Description
I'm using the PyTorch-Quantization Toolkit to create QAT models. A quantized Conv2d model converts fine, but when I use trtexec to convert a quantized Conv3d model, it fails with the following error:
In…
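For context, the fake-quantization step that QAT toolkits insert around conv weights and activations can be sketched in plain numpy. This is an illustrative symmetric per-tensor int8 sketch, not the PyTorch-Quantization Toolkit's actual implementation:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Symmetric per-tensor fake quantization: round to the int8 grid,
    then dequantize, so the rounding error is visible during training."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    amax = np.abs(x).max()
    scale = amax / qmax if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

x = np.array([-1.0, -0.5, 0.0, 0.3, 1.0], dtype=np.float32)
xq = fake_quantize(x)  # close to x, but snapped to 1/127-spaced values
```

During QAT these quantize/dequantize pairs become Q/DQ nodes in the exported ONNX, which TensorRT then fuses; the Conv3d failure in the log above is about that fusion step, not the math itself.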
-
![image](https://user-images.githubusercontent.com/32217129/117620064-a8381c80-b1a2-11eb-8896-58085b4e2c80.png)
The above shows the result produced by quantization-aware training. The command was: python slim/quantization/eval.py -c configs/ssd/ssdlite_mobilene…
-
**Motivation**
As part of the regression tests, the performance of a deployed model should be checked for consistency with the performance of its PyTorch model. In other words, MMDeploy should…
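Such a consistency check usually reduces to comparing outputs of the backend model against the PyTorch reference under a tolerance. The names and thresholds below are illustrative assumptions, not MMDeploy's API:

```python
import numpy as np

def outputs_consistent(ref, deployed, rtol=1e-2, atol=1e-4):
    """Compare a backend model's outputs against the PyTorch reference.
    Tolerances are illustrative: FP16/INT8 backends typically need
    looser thresholds than FP32 ones."""
    return all(
        np.allclose(r, d, rtol=rtol, atol=atol)
        for r, d in zip(ref, deployed)
    )

# Stand-in outputs; in a real regression test these would come from the
# PyTorch model and the converted (e.g. TensorRT) engine respectively.
ref = [np.array([0.10, 0.85, 0.05])]
ok  = [np.array([0.101, 0.849, 0.050])]
bad = [np.array([0.50, 0.40, 0.10])]
```

For detectors, a task-level metric comparison (e.g. mAP delta on a fixed subset) is usually more robust than raw tensor comparison, since box ordering and NMS can differ across backends.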
-
### Describe the Bug
TensorRT version: TensorRT-8.4.0.6
Python version: 3.7
PaddlePaddle-gpu: 2.2.2
OS: Ubuntu 18.04
```python
import numpy as np
import paddle.inference as paddle_infer
def create_…
```
-
## Bug Description
When I use torch.split() in the model code, conversion with Torch-TRT fails.
Complete error messages:
Complete error messages:
```bash
torch.Size([1000, 2048])
Successfully load…
-
## Description
I'm trying to run PeopleSegNet (from the TAO Toolkit with DeepStream), and I get this error:
```
wsadmin@AIML1001:/opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_model…
```
-
### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**:
No
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
Ubunt…
-
[2022-05-17 11:21:22.741] [mmdeploy] [error] [device_impl.cpp:147] 0, -1
[2022-05-17 11:21:22.742] [mmdeploy] [error] [detector.cpp:58] exception caught: invalid argument (1) @ :0
failed to create d…