-
Hi all, when I convert the following operation:
```python
x_padded = torch.nn.functional.pad(x, (0, 0, pad_left, pad_right))
```
I get the error below:
```bash
AttributeError: 't…
```
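For context, `F.pad` reads its pad tuple in pairs starting from the *last* dimension, so `(0, 0, pad_left, pad_right)` leaves dim -1 alone and pads dim -2. A minimal sketch confirming that behavior (shapes here are placeholders, not the poster's model):

```python
import torch
import torch.nn.functional as F

# F.pad consumes the pad tuple in pairs from the LAST dimension backwards:
# (0, 0, pad_left, pad_right) leaves dim -1 untouched and pads dim -2.
x = torch.zeros(2, 3, 4)          # (batch, rows, cols)
pad_left, pad_right = 1, 2

x_padded = F.pad(x, (0, 0, pad_left, pad_right))
print(x_padded.shape)             # torch.Size([2, 6, 4]): rows grew by 1 + 2
```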
-
Thank you very much for your work.
1. I obtained the ONNX file using 'demo_pytorch2onnx.py'.
2. The ONNX file was converted into a TensorRT (.engine) file through DeepStream 5.0-devel.
3. The TensorRT(.engine) f…
KoPub updated 4 years ago
-
Hello,
I've been attempting to deploy the mask2former_flash_internimage_s_640_160k_ade20k_ss model using the provided deploy.py script from the internimage repository, located in the segmentation f…
-
## Bug Description
Cannot load quantize_fp8 even though modelopt[all] is installed.
```
WARNING:torch_tensorrt.dynamo.conversion.aten_ops_converters:Unable to import quantization op. Please in…
```
-
Hi naisy, I tried to use the TensorRT implementation of your code on my Jetson TX2
(in config.yml, model_type: 'trt_v1'),
but I get the following error:
```bash
File "run_image.py", line 131, in main
…
```
-
Hi! Thanks for making this TensorRT conversion, it is really fast!!
Is there a way to run the inference on image sequence instead of single frame?
Thanks!
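In the absence of a built-in option, a common approach is to keep the engine loaded and feed frames one at a time. A minimal sketch (the `infer` callable is a hypothetical stand-in for the repo's single-frame inference, not its actual API):

```python
from pathlib import Path

def run_sequence(frame_dir, infer):
    """Apply a single-frame inference function to every frame, in order."""
    frames = sorted(Path(frame_dir).glob("*.jpg"))  # lexicographic frame order
    return [infer(frame) for frame in frames]

# Usage with a stub in place of the real TensorRT inference call:
# results = run_sequence("frames/", infer=lambda f: f.name)
```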
-
## Description
When I try to build pytorch_quantization from source following readme.md, a build error occurs in tensor_quant_gpu.cu; I suspect a version mismatch.
## Environment
**T…
-
My use case is deploying model inference services in the cloud, using GPU virtualization to split one GPU into multiple instances. Each instance runs a model, and since one car…
-
## Bug Description
The output shape of `aten::_convolution` no longer matches PyTorch after the TensorRT 10 upgrade.
I have noticed that the output shape is correct when I pass in the weight …
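For reference, the output shape PyTorch computes per spatial dimension of a convolution follows the standard formula, which can be checked against what the engine produces. A small helper (pure Python; the function name is mine):

```python
import math

def conv_out_dim(in_dim, kernel, stride=1, padding=0, dilation=1):
    """Output size of one spatial dimension of a convolution (PyTorch's formula)."""
    return math.floor((in_dim + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

# e.g. a 3x3 conv, stride 2, padding 1 on a 224-wide input:
print(conv_out_dim(224, kernel=3, stride=2, padding=1))  # 112
```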
-
## Bug Description
I can't compile this model, and the error seems to be caused by `nn.BatchNorm3d`.
## To Reproduce
Steps to reproduce the behavior:
1. Init the model after importing the…
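A minimal module that exercises `nn.BatchNorm3d` (a generic stand-in for reproduction, not the issue's exact model) might look like:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Smallest model that routes a tensor through BatchNorm3d."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(1, 4, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm3d(4)

    def forward(self, x):
        return self.bn(self.conv(x))

model = TinyNet().eval()
x = torch.randn(1, 1, 8, 8, 8)   # (N, C, D, H, W)
print(model(x).shape)            # torch.Size([1, 4, 8, 8, 8])
```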