-
1. model: MobileNet V1, MobileNet V2, Inception V3, YOLOv5
2. dataset: ImageNet in TFRecord format (1,000 images; see the reading sketch after this list)
3. GPU devices: NVIDIA Jetson TX1, TX2, Xavier, Nano
4. TPU device: Raspberry Pi 4 + Coral TPU
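
For reference, a minimal sketch of reading such a TFRecord ImageNet split; the file pattern and the TF-Slim feature keys (`image/encoded`, `image/class/label`) are assumptions about the common layout, not taken from the setup above:

```python
import tensorflow as tf

# Hypothetical file pattern; adjust to the actual shard names.
files = tf.data.Dataset.list_files("imagenet/validation-*")
dataset = tf.data.TFRecordDataset(files)

def parse_example(serialized):
    # Standard TF-Slim ImageNet TFRecord feature keys (assumed).
    features = tf.io.parse_single_example(serialized, {
        "image/encoded": tf.io.FixedLenFeature([], tf.string),
        "image/class/label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.io.decode_jpeg(features["image/encoded"], channels=3)
    image = tf.image.resize(image, [224, 224])  # MobileNet input size
    return image, features["image/class/label"]

dataset = dataset.map(parse_example).batch(32)
```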
-
### Search before asking
- [X] I have searched the Yolov5_StrongSORT_OSNet [issues](https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/issues) and [discussions](https://github.com/mikel-bros…
-
## Description
I am trying to build TensorRT on Linux x86, but the build fails.
## Environment
**TensorRT Version**: 8.0.3
**NVIDIA GPU**:
**NVIDIA Driver Versi…
-
### Checklist
- [X] I have searched related issues but cannot get the expected help.
- [X] I have read the [FAQ documentation](https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/faq.md) bu…
-
Hello @ttyio,
## Description
I followed the examples here:
1) calibrate and finetune: https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/tutorials/quant_resnet50.ht…
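
For context, the calibration flow from that tutorial boils down to roughly the following sketch; `calib_loader` is a placeholder `DataLoader`, not a name from the tutorial:

```python
import torch
import torchvision
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

# Replace torch.nn layers with quantized equivalents before model creation.
quant_modules.initialize()
model = torchvision.models.resnet50(pretrained=True).cuda().eval()

# Switch all TensorQuantizer modules to calibration mode.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.disable_quant()
        module.enable_calib()

# Feed representative batches to collect activation statistics.
with torch.no_grad():
    for images, _ in calib_loader:  # placeholder calibration DataLoader
        model(images.cuda())

# Compute amax from the collected statistics and re-enable quantization.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.load_calib_amax()
        module.enable_quant()
        module.disable_calib()
```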
-
I generate an engine cache with the onnxruntime TensorRT EP, but the int8 model and the fp16 model are the same size. When I use trtexec to generate an int8 engine, the model size looks correct. I want to know…
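
For what it's worth, a minimal sketch of the session setup I would expect for an int8 engine cache with the TensorRT EP (model path, cache path, and table name are assumptions). One thing worth checking, as a guess rather than a confirmed diagnosis: if `trt_int8_enable` is off or no calibration table is found for a non-Q/DQ model, layers may be built at fp16, which could explain identical engine sizes.

```python
import onnxruntime as ort

providers = [
    ("TensorrtExecutionProvider", {
        "trt_int8_enable": True,
        # Required for int8 on models without Q/DQ nodes.
        "trt_int8_calibration_table_name": "calibration.flatbuffers",
        "trt_engine_cache_enable": True,
        "trt_engine_cache_path": "./trt_cache",
    }),
    "CUDAExecutionProvider",  # fallback for unsupported ops
]
session = ort.InferenceSession("model.onnx", providers=providers)
```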
-
## Description
The outputs of the TensorRT engine converted from ONNX differ from the outputs of the ONNX model. I want to know how to force a layer to FP32 when building with setFlag(nvinfer1::BuilderFlag::kFP16).
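
A common way to do this, shown here as a sketch with the Python API (in C++ the equivalents are `ILayer::setPrecision`, `ILayer::setOutputType`, and `BuilderFlag::kSTRICT_TYPES` on TensorRT 8.0); the model path and layer names below are placeholders:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # hypothetical model path
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# STRICT_TYPES (TensorRT 8.0) makes the builder honor per-layer constraints.
config.set_flag(trt.BuilderFlag.STRICT_TYPES)

# Pin numerically sensitive layers to FP32; these names are placeholders.
sensitive = {"Conv_0", "Softmax_42"}
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if layer.name in sensitive:
        layer.precision = trt.float32
        layer.set_output_type(0, trt.float32)

engine = builder.build_serialized_network(network, config)
```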
## Environment
**TensorRT…
-
## Bug Description
There is an error when using the [official quantization notebook](https://github.com/pytorch/TensorRT/blob/master/notebooks/vgg-qat.ipynb) in this repository, with the official …
-
Hello @marcoslucianops, thank you for sharing your work. In the `MULTIPLE-INFERENCES.MD` file, what is meant by primary inference and secondary inference? What is the difference between them? And I…
-
## Description
I used the [PTQ sample code](https://github.com/NVIDIA/TensorRT/blob/master/tools/pytorch-quantization/examples/calibrate_quant_resnet50.ipynb) to quantize a model from fp16 to int8
…
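
If the goal is then to run the quantized model through TensorRT, the usual next step in that toolkit is to export the calibrated model with fake-quant nodes lowered to ONNX Q/DQ and build the engine with trtexec. A sketch, assuming `model` is the calibrated network from that notebook; the input shape and file name are placeholders:

```python
import torch
from pytorch_quantization import nn as quant_nn

# Export fake-quant nodes as ONNX QuantizeLinear/DequantizeLinear pairs.
quant_nn.TensorQuantizer.use_fb_fake_quant = True

dummy = torch.randn(1, 3, 224, 224, device="cuda")  # placeholder input shape
torch.onnx.export(model, dummy, "quant_model.onnx", opset_version=13)
# Then build the Q/DQ engine with:
#   trtexec --onnx=quant_model.onnx --int8
```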