FeiYull / TensorRT-Alpha

🔥🔥🔥TensorRT for YOLOv8、YOLOv8-Pose、YOLOv8-Seg、YOLOv8-Cls、YOLOv7、YOLOv6、YOLOv5、YOLONAS......🚀🚀🚀CUDA IS ALL YOU NEED.🍎🍎🍎
GNU General Public License v2.0
1.28k stars · 198 forks

Without using trtexec, can an ONNX file be converted to a .trt file for TensorRT C++ inference? #35

Closed Robintjhb closed 12 months ago

Robintjhb commented 1 year ago

Without using trtexec, is it possible to convert an ONNX file to a .trt file and run TensorRT C++ inference? In some situations trtexec is not available, for example a TensorRT environment installed via pip does not come with trtexec.

FeiYull commented 1 year ago

@Robintjhb ref: https://github.com/NVIDIA/TensorRT/blob/release/8.6/samples/trtexec/trtexec.cpp
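
The trtexec sample linked above is itself built on TensorRT's public C++ builder API, so the same ONNX-to-engine conversion can be done directly in your own code, with no trtexec binary involved. A minimal sketch, assuming TensorRT 8.4+ with the nvonnxparser library; the file names `model.onnx` and `model.trt` are placeholders:

```cpp
#include <cstdio>
#include <fstream>
#include <memory>
#include <NvInfer.h>
#include <NvOnnxParser.h>
using namespace nvinfer1;

// createInferBuilder requires an ILogger implementation
class Logger : public ILogger {
    void log(Severity s, const char* msg) noexcept override {
        if (s <= Severity::kWARNING) std::printf("%s\n", msg);
    }
} logger;

int main() {
    // Build phase: parse the ONNX file into a network definition
    auto builder = std::unique_ptr<IBuilder>(createInferBuilder(logger));
    auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx", static_cast<int>(ILogger::Severity::kWARNING)))
        return 1;  // parse errors were already reported through the logger

    // Serialize an engine ("plan"); 1 GiB workspace is an arbitrary choice
    auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());
    config->setMemoryPoolLimit(MemoryPoolType::kWORKSPACE, 1ULL << 30);
    auto plan = std::unique_ptr<IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    if (!plan) return 1;

    // Save the plan; it can later be loaded without the parser or builder
    std::ofstream out("model.trt", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
    return 0;
}
```

Building is slow (TensorRT profiles kernels for the current GPU), which is why the serialized plan is written to disk and deserialized at inference time instead of rebuilding on every run.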

xiaohaoo commented 1 year ago

@Robintjhb Read the serialized model with TensorRT (deserializing a previously built .trt engine):

```cpp
#include <fstream>
#include <memory>
#include <vector>
#include <NvInfer.h>
using namespace std;
using namespace nvinfer1;

// createInferRuntime requires an ILogger implementation
class Logger : public ILogger {
    void log(Severity, const char*) noexcept override {}
} logger;

// Read the serialized engine file into a buffer
ifstream tensorrt_file("path/to/model.trt", ios::binary);
tensorrt_file.seekg(0, ios::end);
auto file_size = tensorrt_file.tellg();
tensorrt_file.seekg(0, ios::beg);
vector<char> buffer(file_size);
tensorrt_file.read(buffer.data(), file_size);
tensorrt_file.close();

// Deserialize the engine and create an execution context
shared_ptr<IRuntime> runtime(createInferRuntime(logger));
shared_ptr<ICudaEngine> engine(runtime->deserializeCudaEngine(buffer.data(), file_size));
shared_ptr<IExecutionContext> engine_context(engine->createExecutionContext());
```
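
Once the engine and execution context exist, inference is a matter of copying inputs to device buffers and invoking the context. A sketch of a hypothetical helper (the name `infer_once` and the single input/output binding layout are assumptions, not part of this repo), for TensorRT 8.x with the CUDA runtime:

```cpp
#include <cstddef>
#include <vector>
#include <cuda_runtime.h>
#include <NvInfer.h>

// Hypothetical helper: one synchronous inference on a deserialized engine,
// assuming binding 0 is the only input and binding 1 the only output.
std::vector<float> infer_once(nvinfer1::IExecutionContext& ctx,
                              const std::vector<float>& input,
                              std::size_t output_count) {
    std::vector<float> output(output_count);
    void* bindings[2] = {nullptr, nullptr};
    cudaMalloc(&bindings[0], input.size() * sizeof(float));
    cudaMalloc(&bindings[1], output_count * sizeof(float));
    cudaMemcpy(bindings[0], input.data(), input.size() * sizeof(float),
               cudaMemcpyHostToDevice);
    ctx.executeV2(bindings);  // blocking call; use an enqueue variant with a CUDA stream for async
    cudaMemcpy(output.data(), bindings[1], output_count * sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
    return output;
}
```

In real code the buffer sizes should come from the engine's binding dimensions rather than being passed in, and allocations should be reused across calls instead of re-created per inference.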