-
(First, it has to be added to Thermodynamics.jl)
-
## Description
## Environment
**TensorRT Version**: 8.5
**NVIDIA GPU**: Jetson Orin Nano Developer Kit (8 GB)
**NVIDIA Driver Version**:
**CUDA Version**: 11.4
**CUDNN Version…
-
# PTQ | Downloading videos with the ffmpeg tool on Windows
This is a just-for-fun share! For anyone who doesn't know, ffmpeg is a slick tool for downloading videos from places where… they can't be downloaded the usual way…
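For a sense of what the post is describing, here is a minimal sketch of the kind of ffmpeg invocation typically used (wrapped in Python; the stream URL and output file name are placeholders, not from the post):

```python
import subprocess

# Pull an HLS/m3u8 stream and remux it to MP4 without re-encoding.
# The URL and output name below are placeholders, not from the post.
stream_url = "https://example.com/stream/playlist.m3u8"
subprocess.run(
    ["ffmpeg", "-i", stream_url, "-c", "copy", "output.mp4"],
    check=True,  # raise CalledProcessError if ffmpeg exits with an error
)
```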
-
## Description
I want to finetune a quantized YOLO model and export it to TRT.
I carefully read the QDQ documentation and some existing issues on where to place and how to remove unused QDQ nodes; the model has 92% int8…
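For context, a minimal sketch of the usual workflow for getting QDQ nodes into an ONNX export destined for TensorRT, assuming NVIDIA's pytorch-quantization toolkit is used (the model constructor and input shape below are placeholders, not the poster's code):

```python
import torch
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

# Patch torch.nn layers with quantized equivalents *before* building the
# model, so Conv/Linear layers pick up TensorQuantizer instances.
quant_modules.initialize()
model = build_yolo()  # hypothetical constructor, stands in for the poster's YOLO

# ... calibrate and/or finetune (QAT) here ...

# Export fake-quant as ONNX QuantizeLinear/DequantizeLinear (QDQ) pairs;
# opset >= 13 is needed for per-channel Q/DQ.
quant_nn.TensorQuantizer.use_fb_fake_quant = True
dummy = torch.randn(1, 3, 640, 640)  # assumed input shape
torch.onnx.export(model, dummy, "yolo_qdq.onnx", opset_version=13)
```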
-
Post-training quantization (PTQ) without finetuning and quantization-aware training (QAT) both work fine, but
I get an error in post-training quantization (PTQ) with fast finetune:
activation = layer.layer.acti…
-
**Describe the bug**
I used Docker to run the onnxruntime transformers optimizer and hit this error, but I can run it successfully on my local Ubuntu machine. Could you offer any suggestions?
![image](htt…
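For reference, a minimal sketch of how the onnxruntime transformers optimizer is typically invoked from Python; the model path and the BERT-base `num_heads`/`hidden_size` values are assumptions, not taken from the report:

```python
from onnxruntime.transformers import optimizer

# Fuse attention/LayerNorm subgraphs in an exported transformer model.
# The input path and BERT-base head/hidden sizes are placeholders.
opt_model = optimizer.optimize_model(
    "model.onnx",
    model_type="bert",
    num_heads=12,
    hidden_size=768,
)
opt_model.save_model_to_file("model_opt.onnx")
```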
-
### Bug description
When running int8 inference with Paddle Inference on a model exported after PaddleSlim PTQ quantization, the following error is reported:
![image](https://github.com/PaddlePaddle/Paddle/assets/69797242/80b898ae-ef6e-4226-8412-8cc1dfff8e37)
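For context, a minimal sketch of the usual Paddle Inference int8 path through TensorRT, assuming that is how the PTQ model is being run; the file names and memory sizes below are placeholders:

```python
from paddle.inference import Config, PrecisionType, create_predictor

# Hypothetical file names for the PaddleSlim PTQ export.
config = Config("model.pdmodel", "model.pdiparams")
config.enable_use_gpu(256, 0)  # 256 MB initial GPU memory pool, device 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Int8,
    use_static=False,
    use_calib_mode=False,  # scales already come from the PTQ export
)
predictor = create_predictor(config)
```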
…
-
Thank you for the amazing work. I was able to set up BEVFusion inference using the model files given in the readme.
I want to use this pipeline for BEVFusion trained on my own dataset, so as per the […
-
I'm having trouble verifying that a simulated-quantized ONNX file offers decent performance.
Issue: after doing PTQ, I cannot use the quantized model in onnx-runtime (preferably on GPU)!
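A minimal sketch of how a quantized ONNX file is usually loaded in ONNX Runtime on GPU, assuming a CUDA-enabled build; the model path and input shape are placeholders:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "model_quant.onnx",  # hypothetical path to the PTQ output
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inp = sess.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
outputs = sess.run(None, {inp.name: x})
print([o.shape for o in outputs])
```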
-
After I run it, the result is always a model with QDQ quantization nodes inserted, not true int8 quantization. Is there a problem with how I'm running it, or is the program simply designed this way?
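For context, a minimal sketch using ONNX Runtime's static quantizer as an example (the excerpt does not name the tool actually used): QDQ format deliberately emits QuantizeLinear/DequantizeLinear pairs that the backend fuses into int8 kernels at load time, while QOperator format emits quantized ops such as QLinearConv directly. The paths, input name, and shape below are placeholders:

```python
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, QuantType, quantize_static,
)

class RandomReader(CalibrationDataReader):
    """Feeds a few random batches; real calibration needs real data.
    The input name and shape here are assumptions."""
    def __init__(self, n=8):
        self._it = iter(
            {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(n)
        )

    def get_next(self):
        return next(self._it, None)

quantize_static(
    "model.onnx",       # hypothetical float model
    "model_int8.onnx",
    calibration_data_reader=RandomReader(),
    quant_format=QuantFormat.QDQ,  # QuantFormat.QOperator emits QLinear* ops instead
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```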