-
### Describe the issue
Hi, I am using ONNX Runtime with the TensorRT Execution Provider for a quantized model (YOLO-NAS). While the TensorRT CLI (trtexec.exe) successfully builds the engine from the ONNX model, t…
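For context, a minimal sketch of how the TensorRT Execution Provider is typically enabled from Python in ONNX Runtime. The option keys are documented TensorRT EP session options; the model path and cache directory are placeholders, not taken from this issue:

```python
# Hedged sketch: provider list for an ONNX Runtime session using the
# TensorRT EP for a Q/DQ-quantized (INT8) model.
providers = [
    ("TensorrtExecutionProvider", {
        "trt_int8_enable": True,          # honor Q/DQ nodes in the quantized model
        "trt_engine_cache_enable": True,  # cache the built engine between runs
        "trt_engine_cache_path": "./trt_cache",  # placeholder directory
    }),
    "CUDAExecutionProvider",              # fallback for unsupported nodes
]

# Requires onnxruntime-gpu with TensorRT support installed:
# import onnxruntime as ort
# session = ort.InferenceSession("yolo_nas_int8.onnx", providers=providers)
```

If the EP silently falls back to CUDA for some nodes, enabling verbose logging on the session usually shows which nodes were rejected.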
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussi…
-
### System Info
- GPU: NVIDIA GeForce RTX 4070 Ti
- CPU: 13th Gen Intel(R) Core(TM) i5-13600KF
- 32 GB RAM
- 1TB SSD
- OS Windows 11
Package versions:
- TensorRT version 9.2.0.post12.dev5
…
-
I tried building an image with ROS 2 Jazzy and JAX, but OpenCV fails to install, which stops the image from being created.
I have also tried Humble, and it fails at the same step if t…
-
Hi again,
I've successfully quantized an ONNX model to INT8, then converted it to a TensorRT engine, and noticed a performance increase compared to FP16.
```bash
python -m modelopt.onnx.quantizati…
```
-
### System Info
GPU: RTX 8000
Driver version: 525.85.05
CUDA version: 12.0
System: Ubuntu 20.04
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own mod…
-
## Description
I am trying to convert an ONNX model to INT8 with the latest TensorRT. I got the following error:
```
[05/19/2023-14:42:31] [E] Error[2]: Assertion getter(i) != 0 failed.
[05/19/2023-14…
```
-
Hi TensorRT-LLM team, your work is incredible.
By following the README file for [multimodal](https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/multimodal/README.md), we were able to successfully run…
-
## Description
I generated a calibration cache for a Vision Transformer ONNX model using the EntropyCalibration2 method. When trying to generate the engine file from the cache file for INT8 precision using trte…
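For reference, a small sketch of the cache-reuse side of INT8 calibration. In TensorRT's Python API, an `IInt8EntropyCalibrator2` subclass implements `read_calibration_cache`, which TensorRT calls first; calibration batches only run when it returns `None`. The cache file name below is a placeholder, and the function is shown standalone (outside the calibrator class) so the logic is clear:

```python
import os

CACHE = "vit_calibration.cache"  # hypothetical cache file name

def read_calibration_cache(path=CACHE):
    """Return cached INT8 scales as bytes if the file exists, else None.

    In a real IInt8EntropyCalibrator2 subclass, this is the method
    TensorRT invokes before deciding whether to run calibration batches.
    """
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    return None
```

A stale or mismatched cache (e.g. generated for a different network or TensorRT version) is a common cause of engine-build failures at this step, so deleting the cache and recalibrating is a useful first check.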
-
### System Info
- CPU: X86
- GPU: NVIDIA L20
- python
- tensorrt 10.3.0
- tensorrt-cu12 10.3.0
- tensorrt-cu12-bindings 10.3.0
- tensorrt-cu12-libs 10…