-
## Description
`cmake .. -DCUDA_VERSION=10.2`
```
Building for TensorRT version: 8.2.4, library version: 8
-- Targeting TRT Platform: x86_64
-- CUDA version set to 10.2
-- cuDNN version set to 8…
```
-
It is my understanding that the new stable release should be able to convert any PyTorch model with fallback to PyTorch when operations cannot be directly converted to TensorRT. I am trying to convert…
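For reference, the fallback path in question is partial compilation; a minimal sketch of how it is usually enabled, assuming the Torch-TensorRT 1.x Python API and a placeholder torchvision model:

```python
import torch
import torch_tensorrt
import torchvision

model = torchvision.models.resnet18().eval().cuda()  # placeholder model

# With require_full_compilation=False (the default), operators that cannot be
# converted to TensorRT are left to run in PyTorch instead of failing the build.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float32},
    require_full_compilation=False,
)
```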
-
I am running jetson-inference on a Drive PX2 (which consists of two TX2 modules); however, it seems that this platform does not support FP16, even though it should. I checked the code and it contains this:
`builder-…
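For what it's worth, the same capability query is also exposed through the TensorRT Python API, so it can be checked directly on the device; a small sketch, assuming the TensorRT Python bindings are installed:

```python
import tensorrt as trt

# Report whether the platform advertises fast FP16 / INT8 kernels.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
print("Fast FP16 supported:", builder.platform_has_fast_fp16)
print("Fast INT8 supported:", builder.platform_has_fast_int8)
```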
-
## Bug Description
```
ERROR: [Torch-TensorRT] - Unsupported operator: aten::where.self(Tensor condition, Tensor self, Tensor other) -> (Tensor)
/opt/conda/lib/python3.8/site-packages/transforme…
```
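A workaround that is often suggested for this kind of error (a sketch only, using a placeholder model, not verified against this transformer) is to explicitly keep the unsupported operator in PyTorch:

```python
import torch
import torch_tensorrt
import torchvision

model = torchvision.models.resnet18().eval().cuda()  # stand-in; the real model is a transformer

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    torch_executed_ops=["aten::where.self"],  # force this op to run in PyTorch rather than TensorRT
)
```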
-
1) PaddlePaddle version: PaddlePaddle-gpu-2.1.2, TensorRT-6.0.1.5
3) GPU: T4, CUDA 10.1, cuDNN 7.6
4) System environment: Python 3.6
- Inference information
1) Python inference
Problem description: Running a PaddleSlim-quantized model with Paddle-TRT, with precision_mode=AnalysisConfig.Precis…
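For reference, an INT8 Paddle-TRT setup for a PaddleSlim-quantized model usually looks like the sketch below (assuming the Paddle 2.x `paddle.inference` API; the model and parameter paths are placeholders):

```python
from paddle.inference import Config, PrecisionType, create_predictor

# Placeholder paths for the quantized model exported by PaddleSlim.
config = Config("model.pdmodel", "model.pdiparams")
config.enable_use_gpu(1000, 0)  # initial GPU memory pool (MB), device id

# Run the TensorRT subgraph engine in INT8; the scales come from the quantized
# model itself, so online calibration is disabled.
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Int8,
    use_static=False,
    use_calib_mode=False,
)
predictor = create_predictor(config)
```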
-
### Describe the issue
Hi, I tried to use the QDQ format to quantize my ONNX model and then used trtexec to benchmark its inference speed, and I ran into a problem similar to #11535. After I added `extra_options={'Ad…
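For context, this is roughly the QDQ static-quantization call in use; a sketch with placeholder paths and random calibration data, and with `extra_options` omitted rather than guessing the truncated key:

```python
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantFormat,
    QuantType,
    quantize_static,
)

class RandomReader(CalibrationDataReader):
    """Placeholder calibration reader: serves a few random batches, then None."""
    def __init__(self, input_name, num_batches=8):
        self._data = iter(
            {input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(num_batches)
        )

    def get_next(self):
        return next(self._data, None)

quantize_static(
    "model.onnx",                     # placeholder input model
    "model_qdq.onnx",                 # placeholder output model
    RandomReader("input"),            # placeholder input name
    quant_format=QuantFormat.QDQ,     # emit QuantizeLinear/DequantizeLinear pairs
    activation_type=QuantType.QInt8,  # symmetric int8, as TensorRT expects
    weight_type=QuantType.QInt8,
)
```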
-
Hi, a quick question. I implemented the naive PTQ algorithm using MQBench and exported the ONNX model; the backend is TensorRT. However, I am confused because the `clip_ranges.json` file is empty:
```json
{…
```
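For context, the export flow is essentially MQBench's documented TensorRT path, reproduced here as a sketch; the import paths, `convert_deploy` arguments, model, input name, and calibration data are all assumptions that should be checked against the installed MQBench version:

```python
import torch
import torchvision
from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.utils.state import enable_calibration, enable_quantization
from mqbench.convert_deploy import convert_deploy

model = torchvision.models.resnet18().eval()  # placeholder model
calib_data = [torch.randn(1, 3, 224, 224) for _ in range(8)]  # placeholder calibration batches

# Insert fake-quant nodes configured for the TensorRT backend.
model = prepare_by_platform(model, BackendType.Tensorrt)

# Calibration pass: observers collect activation ranges here; if this step is
# skipped, the exported clip ranges can come out empty.
enable_calibration(model)
with torch.no_grad():
    for batch in calib_data:
        model(batch)

enable_quantization(model)

# Export ONNX plus the clip_ranges.json that TensorRT consumes.
# "x" is a placeholder input name and shape.
convert_deploy(model, BackendType.Tensorrt, {"x": [1, 3, 224, 224]})
```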
-
Hello, when I run inference, I keep getting the following errors:
test_tipc/test_train_inference_python.sh: line 29: [: =: unary operator expected
python3.7: can't open file 'gpu': [Errno 2] No such file or directory
Run failed with com…
-
python mmdeploy/tools/deploy.py \
mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
mmdeploy/detectors_cascade_rcnn_r50_1x_coco.py \
mmdeploy/detectors_…
-
1) PaddlePaddle version: PaddlePaddle-gpu-1.8.2, TensorRT-6.0.1.5
3) GPU: 1080Ti, CUDA 9.0, cuDNN 7.6
4) System environment: Python 3.6.6
- Inference information
1) Python inference, with the inference library built on CentOS 7
- Problem description: Running a PaddleSlim-quantized model with Paddle-TRT, with precision_mode=…