-
I get this error on a Tesla T4 when running the following command:
```
CUDA_VISIBLE_DEVICES=0 python infer.py --m resnet18 --load_ckpt …
[TensorRT] ERROR: No non-int8 implementation of layer [CONVOLUTION #1]
```
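This error typically means a convolution was constrained to INT8 while TensorRT found no INT8 kernel for it and was not allowed to fall back to another precision. As a minimal sketch (not the original infer.py, and assuming a TensorRT 8.x Python API), the builder config can request INT8 while leaving mixed-precision fallback open:

```python
import tensorrt as trt

def make_int8_config(builder: trt.Builder, calibrator) -> trt.IBuilderConfig:
    """Sketch: request INT8 kernels but keep FP16/FP32 fallback available."""
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)   # request INT8 kernels
    config.set_flag(trt.BuilderFlag.FP16)   # T4 supports FP16, useful as fallback
    # Intentionally NOT setting OBEY_PRECISION_CONSTRAINTS (STRICT_TYPES on older
    # releases): forcing every layer to INT8 is what produces the
    # "No non-int8 implementation of layer" failure for unsupported layers.
    config.int8_calibrator = calibrator     # calibrator supplying the INT8 scales
    return config
```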
-
### System Info
- Ubuntu
- GPU: A100 / RTX 3090
- Docker image: nvcr.io/nvidia/tritonserver:24.02-trtllm-python-py3
- Python tensorrt-llm package (version 0.9.0.dev2024030500) installed in the Docker im…
-
## Description
I attempted to compile a Hugging Face model (https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5), which includes both the model architecture code …
-
* Ubuntu 22.04
* GCC 11.4.0
* torch 2.2.1+cu121
* Python 3.9

When I run
```bash
python waymo_preprocess.py --data_root data/waymo/raw/ --target_dir data/waymo/processed --split trai…
```
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
…
-
## Description
## Environment
**TensorRT Version**: 8.5
**NVIDIA GPU**: Jetson Orin Nano Developer Kit (8 GB)
**NVIDIA Driver Version**:
**CUDA Version**: 11.4
**CUDNN Version…
-
Hi Ryan,
I am trying to create a calibration file for the ResNet-18 Caffe model. You mentioned the following in another issue:
_I have created a reference for INT8 calibration on Ima…
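For an ImageNet-style calibration file, the usual pattern is a calibrator object that streams preprocessed image batches through the builder and writes the resulting scales to a cache file. Below is a minimal sketch of that pattern with TensorRT's Python API; the class name, cache file name, and the assumption that `batches` is a list of preprocessed NCHW float32 arrays are all hypothetical, and `pycuda` is assumed to be installed:

```python
import os
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context
import tensorrt as trt

class ImageNetCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed image batches to TensorRT and caches the INT8 scales."""

    def __init__(self, batches, cache_file="resnet18_int8.cache"):
        super().__init__()
        self.batches = batches                       # list of NCHW float32 arrays
        self.index = 0
        self.cache_file = cache_file
        self.device_input = cuda.mem_alloc(batches[0].nbytes)

    def get_batch_size(self):
        return self.batches[0].shape[0]

    def get_batch(self, names):
        if self.index >= len(self.batches):
            return None                              # no more calibration data
        batch = np.ascontiguousarray(self.batches[self.index], dtype=np.float32)
        cuda.memcpy_htod(self.device_input, batch)
        self.index += 1
        return [int(self.device_input)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None                                  # no cache yet: run calibration

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)                           # this file is the calibration table
```

The file written by `write_calibration_cache` is the calibration table, which can be reused for later engine builds without rerunning calibration.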
-
Following the documentation, I have already installed the TRT-enabled Paddle build,
paddlepaddle_gpu-2.2.1-cp37-cp37m-linux_x86_64.whl
When running infer.py on the exported PicoDet model with --run_mode=trt_int8 --trt_calib_mode=True, I get an error:
File "/home/vehicle_detection/PaddleDetecti…
-
After successfully quantizing and exporting ONNX models for ResNet18 in two different modes, `int8` and `fp8`, I am trying to export these ONNX models to TRT, but no luck so far. It returns Error No sup…
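When a Q/DQ-quantized ONNX model fails to import, the parser's error list usually names the unsupported node; explicitly quantized INT8 graphs also still need the INT8 builder flag, and FP8 Q/DQ nodes require a TensorRT release new enough to support FP8. A rough sketch, assuming a TensorRT 8.6+ Python API and a hypothetical file name:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("resnet18_int8_qdq.onnx", "rb") as f:      # hypothetical file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))               # shows which node is unsupported
        raise SystemExit("ONNX import failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)                # honor the Q/DQ scales
engine = builder.build_serialized_network(network, config)
with open("resnet18_int8.plan", "wb") as f:
    f.write(engine)
```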
-
```
python3 detectnet.py --model=peoplenet pedestrians.mp4 pedestrians_peoplenet.mp4
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for pedestrians.mp4
O…
```