NVIDIA / TensorRT-Model-Optimizer

TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques, including quantization, pruning, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://nvidia.github.io/TensorRT-Model-Optimizer

ValueError: Runtime TRT is not supported. #49

Open hawl666 opened 3 months ago

hawl666 commented 3 months ago

Running onnx_ptq/evaluate_vit.py fails with: `ValueError: Runtime TRT is not supported.` (screenshot attached)

riyadshairi979 commented 3 months ago

That script evaluates the input ONNX model after compiling it to a TensorRT engine. It looks like your system does not have TensorRT installed. Try using the Docker image as mentioned here.
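As a quick sanity check before running the evaluation script, you can verify that the `tensorrt` Python package is importable. This is an illustrative snippet, not part of the repository; the helper name is hypothetical.

```python
import importlib.util


def tensorrt_available() -> bool:
    """Return True if the `tensorrt` Python package can be imported.

    Uses find_spec so the check does not fail with an ImportError
    when TensorRT is absent.
    """
    return importlib.util.find_spec("tensorrt") is not None


if __name__ == "__main__":
    if tensorrt_available():
        print("TensorRT is installed; evaluate_vit.py should find the TRT runtime.")
    else:
        print("TensorRT is missing: install it or run inside the recommended Docker image.")
```

If the check reports TensorRT as missing, installing it (or using the NVIDIA Docker image from the docs) should resolve the `Runtime TRT is not supported` error.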