-
## ❓ Question
I have a PTQ model and a QAT model trained with the official PyTorch API following the quantization tutorial, and I wish to deploy them on TensorRT for inference. The model is metaforme…
-
Thank you very much for your work.
1. I obtained the ONNX file using 'demo_pytorch2onnx.py'.
2. The ONNX file was converted into a TensorRT (.engine) file through DeepStream 5.0-devel.
3. The TensorRT (.engine) f…
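For reference (not part of the original steps), here is a minimal sketch of loading the resulting engine with the TensorRT Python API; the file name model.engine is a placeholder, and the exact runtime calls differ slightly between TensorRT releases:
```python
# Minimal sketch: deserialize a serialized TensorRT engine ("model.engine" is
# a placeholder name) and create an execution context. Buffer allocation and
# the actual inference call are omitted because they vary between TRT versions.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(TRT_LOGGER)

with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
print("engine deserialized:", engine is not None, "context:", context is not None)
```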
-
My env:
- GPU: NVIDIA 4090
- System: Windows
- CUDA: 12.4
- cuDNN: 9.1
I migrated the onnxruntime grid_sample 5D code from the liqun/imageDecoder_cuda branch to the main branch and compiled it.
The code is here: ht…
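For context, a small sketch of a 5D (volumetric) grid_sample call of the kind the migrated kernel has to handle; the tensor shapes here are illustrative assumptions, not values from the original issue:
```python
# Sketch of a 5D grid_sample call (shapes are illustrative assumptions):
# input is (N, C, D, H, W) and grid is (N, D_out, H_out, W_out, 3),
# which is the volumetric case that requires the 5D kernel.
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 8, 16, 16)          # (N, C, D, H, W)
grid = torch.rand(1, 4, 8, 8, 3) * 2 - 1  # (N, D_out, H_out, W_out, 3) in [-1, 1]

out = F.grid_sample(x, grid, mode="bilinear", align_corners=False)
print(out.shape)  # torch.Size([1, 2, 4, 8, 8])
```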
-
Hello,
I've been attempting to deploy the mask2former_flash_internimage_s_640_160k_ade20k_ss model using the provided deploy.py script from the internimage repository, located in the segmentation f…
-
Your onnx2trt.py contains code that is compatible only with older TensorRT versions and relies on many deprecated functions and properties.
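To illustrate the point, here is a hedged sketch of what the same ONNX-to-engine build looks like against the current (TensorRT 8.x+) Python API, using a builder config with build_serialized_network instead of the deprecated max_workspace_size / build_cuda_engine path; model.onnx and model.engine are placeholder names, not files from the repository:
```python
# Sketch of an ONNX -> TensorRT engine build with the non-deprecated API
# (TensorRT 8.x+). "model.onnx" / "model.engine" are placeholder names.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network, as required by the ONNX parser on TensorRT 8.x.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
# Replaces the deprecated builder.max_workspace_size attribute.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
config.set_flag(trt.BuilderFlag.FP16)

# Replaces the deprecated build_cuda_engine / build_engine calls.
serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```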
-
### Issue Type
Performance
### Source
binary pypi
### Tensorflow Version
2.10.0
### Custom Code
No
### OS Platform and Distribution
Linux Ubuntu 18.04
### Python ve…
-
Hi all, when I run the conversion of the following operation:
```python
x_padded = torch.nn.functional.pad(x, (0, 0, pad_left, pad_right))
```
I get the error below:
```bash
AttributeError: 't…
```
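The snippet is cut off before the full traceback, so the converter being used is not visible; as a hedged isolation test, assuming an ONNX-based export path, the pad call can be wrapped in a tiny module and exported on its own (the module name, pad values, and input shape below are made up for the sketch):
```python
# Minimal repro sketch (assumed setup): export a module that only calls
# torch.nn.functional.pad, to check whether the operator converts in isolation.
import torch
import torch.nn.functional as F

class PadOnly(torch.nn.Module):
    def __init__(self, pad_left: int, pad_right: int):
        super().__init__()
        self.pad_left = pad_left
        self.pad_right = pad_right

    def forward(self, x):
        # (0, 0, pad_left, pad_right) pads the second-to-last dimension.
        return F.pad(x, (0, 0, self.pad_left, self.pad_right))

model = PadOnly(pad_left=1, pad_right=2).eval()
dummy = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy, "pad_only.onnx", opset_version=13,
                  input_names=["input"], output_names=["output"])
```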
-
Hi naisy, I tried to use the TensorRT implementation of your code on my Jetson TX2
(in config.yml, model_type: 'trt_v1'),
but I get the following error:
File "run_image.py", line 131, in main
…
-
### Search before asking
- [X] I have searched the HUB [issues](https://github.com/ultralytics/hub/issues) and [discussions](https://github.com/ultralytics/hub/discussions) and found no similar quest…
-
### System Info
- CPU architecture: x86_64
- Host memory: 256GB
- GPU
  + Name: NVIDIA A30
  + Memory: 24GB
- Libraries
  + TensorRT-LLM: v0.11.0
  + TensorRT: 10.1.0
  + CUDA: 12.6
  + NVID…