-
Hi,
I have an RT-DETR (l) model trained using ultralytics. Naturally, I used the ultralytics-rtdetr script to get an ONNX file. As I wanted a dynamic batch size, I used
```console
python3 rtdetr_…
```
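For comparison, a dynamic-axis ONNX export can also be done through the ultralytics Python API; this is a minimal sketch, and the checkpoint path and opset value are placeholders rather than values from the report:

```python
# Minimal sketch (not the exact command above): export an ultralytics RT-DETR (l)
# checkpoint to ONNX with dynamic axes. "rtdetr-l.pt" and opset 17 are placeholders.
from ultralytics import RTDETR

model = RTDETR("rtdetr-l.pt")                         # trained checkpoint
model.export(format="onnx", dynamic=True, opset=17)   # dynamic=True marks the batch axis dynamic
```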
-
### 🐛 Describe the bug
We are facing this issue when saving the Torch-TensorRT compiled module using torch.export.save(). Here's a link to our exporter code: https://github.com/pytorch/TensorRT/blob/…
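For context, a minimal repro sketch of the failing pattern, using a toy module and placeholder shapes (nothing below is taken from the exporter code linked above):

```python
import torch
import torch.nn as nn
import torch_tensorrt

# Placeholder module; the real model from the report is not shown here.
class ToyModel(nn.Module):
    def forward(self, x):
        return torch.relu(x)

model = ToyModel().eval().cuda()
inputs = (torch.randn(1, 3, 224, 224).cuda(),)

# Compile with the Torch-TensorRT dynamo frontend.
trt_module = torch_tensorrt.compile(model, ir="dynamo", inputs=list(inputs))

# torch.export.save() serializes an ExportedProgram, so the compiled module is
# re-exported before saving; this is the step where the reported failure shows up.
exported = torch.export.export(trt_module, inputs)
torch.export.save(exported, "trt_module.ep")
```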
-
Hello, I converted tusimple_res18.pth into tusimple_res18.engine, but the test results on the TuSimple dataset are very poor. Could you please help me check what is wrong with the conversion? Thank you very much! First, I used deploy/pt2onnx.py --config_path configs/tusimple_res18.py --model_path weight/tusimple_res18.pth to convert it to tu…
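For reference, one common way to turn the resulting ONNX file into a TensorRT engine is trtexec, roughly as sketched below; the file names follow the report, but this may not be the exact command that was used:

```console
trtexec --onnx=tusimple_res18.onnx --saveEngine=tusimple_res18.engine
```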
-
### System Info
DGX H100
### Who can help?
When building the engine with:
```sh
trtllm-build --fast-build --model_config $model_cfg
```
and then benchmarking with gptManagerBenchmark, it repo…
-
I trained yolov7-tiny on 4 classes for recognition of face landmarks (face, eye, nose, and mouth). I am now trying to convert .pt --> .onnx and am facing this warning:
![Screenshot 2024-04-04 160704](h…
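For reference, a typical yolov7 export.py invocation looks roughly like the sketch below; the weights path and the chosen flags are assumptions, not the exact command from the report:

```console
python export.py --weights yolov7-tiny-landmarks.pt --img-size 640 640 --grid --simplify
```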
-
Thank you for your excellent work!
Recently I have been trying to use TensorRT to accelerate Depth Anything on a Jetson Orin NX, but I found that the inference speed of the converted trt file is not significantly better than the onnx file, and is even slower. Specifically:
```
ONNX Inference Time: 2.7s per image
```
```
TRT Inference Time: 3.0s per image
```
…
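For comparison, a minimal sketch of building the engine with reduced precision, which is usually what gives TensorRT its speed advantage on Jetson; the file names and the --fp16 flag here are assumptions, not the exact command used in the report:

```console
trtexec --onnx=depth_anything.onnx --saveEngine=depth_anything_fp16.engine --fp16
```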
-
### Describe the issue
I got large-scale test failures in the test process.
The failure log is provided below as the full log.
I've tried TensorRT versions from 10.2 to 10.5 to build onnxruntime-…
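For context, a typical way to build onnxruntime with the TensorRT execution provider looks roughly like the sketch below; the paths are placeholders and this is not the reporter's exact command:

```sh
./build.sh --config Release --parallel --build_wheel \
  --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu \
  --use_tensorrt --tensorrt_home /usr/local/TensorRT
```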
-
I'm trying to write a unit test for flash attention using version 0.14.0.dev2024100100.
I noticed that `host_runtime_perf_knobs` is a new feature in recent versions. Here is how I use it and the re…
-
### Describe the issue
The symptom I see is that if I delete the cache, it takes 9 seconds to regenerate the cache files. If I create more engines for the same model in the same process, it takes 40 m…
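For context, the TensorRT EP engine cache in onnxruntime is typically enabled through provider options as in the sketch below; the model name and cache path are placeholders, not taken from the report:

```python
import onnxruntime as ort

providers = [
    ("TensorrtExecutionProvider", {
        "trt_engine_cache_enable": True,      # reuse serialized engines across sessions
        "trt_engine_cache_path": "./trt_cache",
    }),
    "CUDAExecutionProvider",
]
session = ort.InferenceSession("model.onnx", providers=providers)
```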
-
## Bug Description
When I use torch_tensorrt.compile on a transformer module with dynamic_shapes, this error occurs.
## To Reproduce
```python
def test_compile_v1():
    model = AutoModel.from_pretrai…
```
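For context, a minimal sketch of the dynamic-shape compile pattern using torch_tensorrt.Input ranges; the model name, shapes, and dtype below are placeholders, not the values from the truncated repro above:

```python
import torch
import torch_tensorrt
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()  # placeholder checkpoint

# Describe the allowed shape range for the input ids (batch, sequence length).
inputs = [
    torch_tensorrt.Input(
        min_shape=(1, 8),
        opt_shape=(4, 64),
        max_shape=(8, 128),
        dtype=torch.int64,
    )
]
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
```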