-
* face_recognition version: 1.2.3
* Python version: 3.6.8
* Operating System: Ubuntu 18.04 (Jetson Nano Optimized)
Hello Adam sir! First of all, let me tell you that face_recognition is an extreme…
-
I tried to run the example shown on the homepage:
```python
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet
device = torch.device("cpu")
# create some…
```
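For reference, the example on the torch2trt homepage builds the model and the sample input on the GPU; torch2trt converts through TensorRT, which requires CUDA, so the `torch.device("cpu")` line above will not work as written. A minimal sketch along those lines:

```python
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

# create a regular PyTorch model in eval mode on the GPU
# (TensorRT conversion requires CUDA tensors)
model = alexnet(pretrained=True).eval().cuda()

# create example data with the shape the engine should expect
x = torch.ones((1, 3, 224, 224)).cuda()

# convert to TensorRT, feeding the sample data as input
model_trt = torch2trt(model, [x])

# sanity check: compare TensorRT output against the original model
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))
```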
-
Following the recent change to Leela Zero's licensing (https://github.com/gcp/leela-zero/commit/808bb43df34654d357be2dd278eba19c03f07094), we can similarly update the license in the opencl, blas and shar…
-
Hello, when I look at the TensorRT output I find that the annotations drawn on the image are very dense. I tested test.py and it produces correct results, and I also updated the coco.names file when using TensorRT. Could you help me figure out the cause? Thanks!
-
## Environment
- [Build command] cmake .. -DBUILD_ON_JETSON=ON \
  -DENABLE_VISION=ON \
  -DENABLE_PADDLE_BACKEND=OFF \
  -DPADDLEINFERENCE_DIRECTORY=/Download/paddle_inference_jetson \
  …
-
### Describe the issue
I am trying to run an Olive-converted UNet model with the TensorrtExecutionProvider, but it keeps throwing this error.
```
2023-06-04 01:08:26.8714279 [E:onnxruntime:Default, tensor…
```
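For context, a session with the TensorRT execution provider is typically created like the sketch below; the model path is a placeholder for the Olive output, and the provider list uses the usual TensorRT-first ordering with CUDA and CPU fallbacks:

```python
import onnxruntime as ort

# placeholder path to the Olive-converted UNet model
model_path = "unet/model.onnx"

# request the TensorRT EP first, with CUDA and CPU as fallbacks;
# onnxruntime skips any provider it cannot initialize
session = ort.InferenceSession(
    model_path,
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)

# verify which providers actually loaded; if the TensorRT EP failed
# to initialize, it will be missing from this list
print(session.get_providers())
```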
-
Hi @enazoe, I'm currently using yolov5 and trying different batch sizes.
I'm seeing long inference times, and pre-processing and NMS are also really slow.
I've tested with an i7 8th gen, NVIDIA RTX 2080 Ti…
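One way to narrow this down is to time each stage separately; a minimal sketch, assuming a PyTorch pipeline and torchvision's NMS (the model, shapes, and thresholds below are illustrative stand-ins, not from the original setup):

```python
import time
import torch
from torchvision.ops import nms

def timed(label, fn, *args):
    # synchronize so pending GPU work is included in the measurement
    torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn(*args)
    torch.cuda.synchronize()
    print(f"{label}: {(time.perf_counter() - start) * 1e3:.2f} ms")
    return out

# illustrative stand-ins for the real pipeline
batch = torch.rand(8, 3, 640, 640, device="cuda")           # pre-processed batch
model = torch.nn.Conv2d(3, 16, 3, padding=1).cuda().eval()  # placeholder model

with torch.no_grad():
    feats = timed("inference", model, batch)

# fake detections: 1000 candidate boxes in (x1, y1, x2, y2) format
boxes = torch.rand(1000, 4, device="cuda") * 640
boxes[:, 2:] += boxes[:, :2]  # make sure x2 > x1 and y2 > y1
scores = torch.rand(1000, device="cuda")
keep = timed("nms", nms, boxes, scores, 0.45)
```

Timing the stages this way makes it clear whether a larger batch is actually helping inference or whether pre-processing and NMS dominate the end-to-end latency.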
-
**Describe the bug**
I want to deploy the TensorRT engine with triton-inference-server, but it can't load the TRT model.
**To Reproduce**
I've converted the TRT engine file from an mmdet model with doc…
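For a TensorRT engine, Triton expects a model-repository layout and config roughly like the sketch below (the model name, tensor names, and dims are placeholders; note also that the engine must have been serialized with the same TensorRT version that ships in the Triton container, otherwise loading fails):

```
model_repository/
└── mmdet_trt/                # placeholder model name
    ├── config.pbtxt
    └── 1/
        └── model.plan        # the serialized TensorRT engine

# config.pbtxt (illustrative values)
name: "mmdet_trt"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  { name: "input", data_type: TYPE_FP32, dims: [ 3, 640, 640 ] }
]
output [
  { name: "dets", data_type: TYPE_FP32, dims: [ 100, 5 ] }
]
```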
-
```
qwen# python convert_checkpoint.py --model_dir /code/tensorrt-llm/Qwen1.5-32B-Chat/ --output_dir ./trt_ckpt/qwen1.5-32b/fp16 --dtype float16 --tp_size 4
[TensorRT-LLM] TensorRT-LLM version: 0.11.0.de…
```
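For the surrounding workflow: after convert_checkpoint.py succeeds, the usual next step in TensorRT-LLM is to build engines from the produced checkpoint directory, roughly like the command below (paths mirror the command above; the plugin flag is a common but optional choice):

```
trtllm-build --checkpoint_dir ./trt_ckpt/qwen1.5-32b/fp16 \
             --output_dir ./trt_engines/qwen1.5-32b/fp16 \
             --gemm_plugin float16
```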
-
Hello, and thank you again for the hard work on pushing deployment of 3D detection models.
I ran several tests on the PointPillars arch, trained on the KITTI dataset (reduced), and compared the computing time and…