Open allegorywrite opened 1 month ago
This problem has been solved.

The folder with the provided CNN models contains engine caches such as `TensorrtExecutionProvider_TRTKernel_graph_tf2onnx_406512749031481965_1_0_fp16_int8`. These cache files contain data that is strictly tied to the hardware they were generated on, so loading them on a different machine caused the error.

To fix this, delete all engine cache files under `models` and set the following option in `onnx_generic.h` so that new cache files are generated:

```cpp
tensorrt_options.trt_engine_cache_enable = 1;
```
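The cleanup step could be scripted as below. This is only a sketch: the `models` directory name and the `TensorrtExecutionProvider_` cache-file prefix are assumed from the filename quoted above.

```shell
# Delete hardware-specific TensorRT engine caches under models/ so that
# ONNX Runtime regenerates them for the current GPU on the next run.
# The prefix is an assumption based on the cache filename in this issue.
find models -type f -name 'TensorrtExecutionProvider_*' -delete
```

The `.onnx` model files themselves are left untouched; only the engine caches, which are rebuilt automatically when `trt_engine_cache_enable` is set, are removed.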
Hi, I tried to speed up CNN inference using TensorRT, but I ran into the following two problems. Have these issues been addressed?

1. SuperGlue gets stuck with a warning:
2. NetVLAD issues a warning and gets stuck when quantizing to int8: