-
Hello, jaybdub.
Thanks for your code.
I am trying to accelerate inference of a trained PyTorch model with TensorRT,
but have not found anything useful in the blogs I searched.
I found you have done this work …
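For reference, a minimal sketch of this kind of conversion with torch2trt (jaybdub's library); the torchvision model and input shape below are placeholders rather than the actual network:

```python
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

# Placeholder model and input; the real network and input shape would differ.
model = resnet18(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# Trace the PyTorch model and build a TensorRT engine from it.
model_trt = torch2trt(model, [x], fp16_mode=True)

# The converted module is a drop-in replacement for inference;
# comparing outputs gives a quick sanity check on numerical drift.
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))
```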
-
Hello @CarkusL I have tried your method for converting CenterPoint to ONNX and then to TensorRT. Is there a way to visualize the engine file output or calculate the validation accuracy of the en…
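One rough way to sanity-check an exported engine, assuming the TensorRT 8.x binding-index Python API, pycuda, and static input shapes; the engine path is a placeholder, and validation accuracy would still have to be computed by feeding real preprocessed samples through this loop and into your CenterPoint evaluation code:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("centerpoint.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (TensorRT 8.x binding API).
bindings, buffers = [], []
for i in range(engine.num_bindings):
    shape = tuple(context.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(shape, dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    buffers.append((host, dev, engine.binding_is_input(i)))

# Copy a preprocessed sample into the input binding(s) and run inference.
for host, dev, is_input in buffers:
    if is_input:
        host[...] = 0.0  # placeholder: real preprocessed data goes here
        cuda.memcpy_htod(dev, host)
context.execute_v2(bindings)

# Copy outputs back and compare them against the PyTorch model's outputs.
for host, dev, is_input in buffers:
    if not is_input:
        cuda.memcpy_dtoh(host, dev)
        print(host.shape, host.dtype)
```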
-
good job!
-
Hi~ @tianzhi0549
I trained FCOS-VoVNet39 on the CrowdHuman dataset and got a decent result.
Now I want to convert my PyTorch model to an ONNX model or a TensorRT model.
I read the detectron2 docu…
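As a starting point, a generic sketch of the torch.onnx.export path, assuming a tracing-friendly nn.Module; the real FCOS/VoVNet39 model from detectron2 typically needs a wrapper around its forward(), so the module, shapes, and tensor names here are all placeholders:

```python
import torch
import torch.nn as nn

class TinyHead(nn.Module):
    """Stand-in for a traceable wrapper around the detection model."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyHead().eval()
dummy = torch.randn(1, 3, 800, 1333)  # placeholder input resolution

torch.onnx.export(
    model, dummy, "fcos_vovnet39.onnx",
    opset_version=11,
    input_names=["images"],
    output_names=["features"],
    dynamic_axes={"images": {0: "batch"}},  # allow variable batch size
)
```

The exported .onnx file can then be handed to trtexec or the TensorRT ONNX parser to build an engine.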
-
May I ask whether the performance of TensorRT-SSD is much worse than Caffe-SSD with the same weights?
-
Hi,
When running a benchmark on GPU with the ort_tensorrt backend: `./bench_model.sh ./stsb-xlm-r-multilingual.bert.opt.onnx --repeat=100 --number=1 --warmup=10 --device=gpu --ort-tensorrt`, I got th…
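For comparison, a minimal sketch of loading the same model directly through ONNX Runtime's TensorRT execution provider, assuming a GPU build of onnxruntime; the input names and shapes below are placeholders for a BERT-style model:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "stsb-xlm-r-multilingual.bert.opt.onnx",
    providers=[
        "TensorrtExecutionProvider",  # falls back to the next provider if unavailable
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)

# Placeholder BERT-style inputs: a single sequence of length 128.
batch, seq = 1, 128
feeds = {
    "input_ids": np.ones((batch, seq), dtype=np.int64),
    "attention_mask": np.ones((batch, seq), dtype=np.int64),
}
outputs = sess.run(None, feeds)
print([o.shape for o in outputs])
```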
-
### System Info
CPU: x86_64
GPU: NVIDIA L20
TensorRT branch: v0.8.0
CUDA: Driver Version 535.154.05, CUDA Version 12.3 (from nvidia-smi)
### Who can help?
@Tracin
### Information
- [X…
-
Greetings, I have been searching for a way to run kohya_ss on the Jetson AGX Orin inside the NVIDIA container so that it will utilize the GPU. After cloning this repo and running the docker compose line, t…
-
Firstly, thank you for sharing your code.
To build your TensorRT plugin, I tried `cmake .. -DTENSORRT_PREFIX_PATH='/PCDet/tools/tensorrt_utils/TensorRT-8.5.3.1' && make`. However, I got the followi…
-
TensorRT does not support 64-bit precision in ONNX models:
Error when running `make yolo`:
[2024-04-01 20:28:03][error][trt_infer.cpp:23]:NVInfer: src/tensorRT/onnx_parser/ModelImporter.cpp:739: --- End node ---
[2024-04-01 20:28:03][error][trt_infe…
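For what it's worth, TensorRT has no native INT64 support and its ONNX parser typically attempts to cast INT64 weights down to INT32 with a warning, so listing the full parser errors can help pin down the node that actually fails. A small sketch using the TensorRT 8.x Python API, with a placeholder model path:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file and print every parser error instead of only the last one.
with open("yolo.onnx", "rb") as f:  # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```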