-
Thanks for sharing this great code!
However, I am running into an error when converting DeepLabV3 models with torch2trt.
--> "inference segmentation.ipynb"
The backbone alone such as resne…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Expo…
-
**The PaddleDetection team appreciates any suggestions or problems you report~**
## Checklist:
1. I have searched related issues but could not get the expected help.
2. I have read the [FAQ](https:/…
-
## Description
yolov5s quantized with pytorch-quantization
reference: https://github.com/maggiez0138/yolov5_quant_sample
onnx -> fp16: 3 ms
qat -> onnx -> int8: 4 ms
Why is INT8 slower than FP16? Please let me know, thanks.
[onnx fil…
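Single-run latency numbers like these can be noisy; a warm-up-then-average timing loop is the usual way to compare engines. A minimal sketch (the `run` callable is a placeholder for one inference call, not any specific TensorRT API):

```python
import time

def benchmark(run, warmup=10, iters=100):
    """Average latency in milliseconds of calling run(),
    after `warmup` untimed calls to let caches/clocks settle."""
    for _ in range(warmup):
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - start) * 1000.0 / iters

# Dummy workload standing in for an inference call:
latency_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
```

With a real engine, `run` would enqueue one inference and synchronize before returning, so the timed span covers the full call.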
-
I now provide simpler code for the NMS plugin; please enjoy it:
https://github.com/Linaom1214/tensorrt-python
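For readers unfamiliar with what such a plugin computes, greedy NMS can be sketched in a few lines of NumPy (an illustration of the algorithm, not the linked TensorRT plugin's implementation):

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression.
    boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes overlapping the kept box above the threshold
        order = order[1:][iou <= iou_thr]
    return keep
```

A TensorRT plugin does the same suppression on-device in batched form, which is why it is usually fused into the engine instead of run in Python.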
-
# Deploy on NVIDIA Jetson using TensorRT and DeepStream SDK
This guide explains how to deploy a trained model onto the NVIDIA Jetson platform and perform inference using TensorRT and the DeepStream SDK. He…
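DeepStream's `nvinfer` element is driven by a plain-text config file; a minimal sketch of the `[property]` group (the engine path and class count below are placeholders for your own model, not values from this guide):

```ini
[property]
gpu-id=0
# Serialized TensorRT engine built from your model (placeholder path)
model-engine-file=model_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
batch-size=1
```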
-
Which model do you want to support in mmdeploy? Comment here. You can like a comment to increase model priority.
| | Model | Codebase | Backend | Number of likes | Released |
| :--- |…
-
### Describe the issue
Hello,
I'm trying to quantize an ONNX model to INT8 using the ONNX Runtime tools provided [here](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/…
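The INT8 quantization these tools perform maps each float tensor to 8-bit integers through a scale and zero point; a minimal NumPy sketch of asymmetric affine quantization (an illustration of the math, not ONNX Runtime's implementation):

```python
import numpy as np

def quantize_int8(x):
    """Asymmetric affine quantization of a float array to int8.
    Returns (q, scale, zero_point) with x ~ (q - zero_point) * scale."""
    x_min, x_max = float(x.min()), float(x.max())
    # The representable range must include 0 so that 0.0 quantizes exactly
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)
    scale = (x_max - x_min) / 255.0 or 1.0
    # Choose zero_point so that x_min maps to the int8 minimum, -128
    zero_point = int(round(-128 - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale
```

Static quantization tools calibrate `scale`/`zero_point` per tensor (or per channel) from representative data rather than from a single array's min/max as above.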
-
I would like to quantize my model to INT8 precision and then compile it using torch_tensorrt.
Unfortunately, it is a [transformer-based vision model](https://github.com/mit-han-lab/efficientvit/blob/ma…
-
I have a model:
```python
from torch import nn

class MLP(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        d_model, d_ff = 512, 2048
        self.lin1 = nn.Linear(d_model, d_ff)
        se…