Tencent / ncnn

ncnn is a high-performance neural network inference framework optimized for the mobile platform

Crash during destruction when using ncnn::create_gpu_instance() #5291

Closed niuxiaozhang closed 10 months ago

niuxiaozhang commented 10 months ago

error log

(screenshot of the crash)

context

windows10 + ncnn-20240102-windows-vs2019

how to reproduce

```cpp
// yolox.cpp
Yolox::Yolox(const char* model_path, bool useGPU)
{
    m_detectorNet = new ncnn::Net();
    m_detectorNet->register_custom_layer("YoloV5Focus", YoloV5Focus_layer_creator);

    m_detectorNet->opt.use_vulkan_compute = useGPU;   // gpu
    m_detectorNet->opt.use_fp16_arithmetic = true;    // fp16 arithmetic acceleration

    char parampath[256];
    char binpath[256];
    // yolox-tiny yolox-nano
    sprintf_s(parampath, "%s\\%s.param", model_path, "yolox-nano");
    sprintf_s(binpath, "%s\\%s.bin", model_path, "yolox-nano");

    int ret = m_detectorNet->load_param(parampath);
    assert(ret == 0);
    ret = m_detectorNet->load_model(binpath);
    assert(ret == 0);
}

Yolox::~Yolox()
{
    m_detectorNet->clear();
    delete m_detectorNet;
}
```

```cpp
// rtmpose.cpp
RtmPose::RtmPose(const char* model_path, bool useGPU)
{
#if NCNN_VULKAN
    ncnn::create_gpu_instance();
    bool hasGPU = ncnn::get_gpu_count() > 0;
#endif

    m_useGPU = hasGPU && useGPU;

    char parampath[256];
    char binpath[256];
    m_poseNet = new ncnn::Net();
    m_poseNet->opt.use_vulkan_compute = m_useGPU;   // gpu
    m_poseNet->opt.use_fp16_arithmetic = true;      // fp16 arithmetic acceleration
    sprintf_s(parampath, "%s\\%s.param", model_path, "rtmpose-tiny");
    sprintf_s(binpath, "%s\\%s.bin", model_path, "rtmpose-tiny");
    int ret = m_poseNet->load_param(parampath);
    assert(ret == 0);
    ret = m_poseNet->load_model(binpath);
    assert(ret == 0);

    m_ptrYolox = std::make_unique<Yolox>(model_path, m_useGPU);
}

RtmPose::~RtmPose()
{
#if NCNN_VULKAN
    ncnn::destroy_gpu_instance();
#endif

    m_poseNet->clear();
    delete m_poseNet;
}
```

I load two models in two classes; when the models are destructed, the crash shown in the screenshot above occurs.

niuxiaozhang commented 10 months ago

Resolved. ncnn::destroy_gpu_instance() should be called only after all ncnn::Net objects have been destroyed.
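
A minimal sketch of that corrected teardown order, reusing the member names from the reproduction code above (the explicit m_ptrYolox.reset() is an assumed restructuring, not code from the issue). In the original ~RtmPose, the unique_ptr member m_ptrYolox is destroyed only after the destructor body runs, so both nets still hold Vulkan resources when ncnn::destroy_gpu_instance() is called:

```cpp
// rtmpose.cpp -- sketch of a destructor with the corrected order:
// destroy every ncnn::Net (including the one owned by Yolox)
// before tearing down the Vulkan GPU instance.
RtmPose::~RtmPose()
{
    m_poseNet->clear();
    delete m_poseNet;
    m_poseNet = nullptr;

    // Release the nested detector explicitly so its ncnn::Net does not
    // outlive the GPU instance (member destruction would otherwise run
    // after the destructor body, i.e. after destroy_gpu_instance()).
    m_ptrYolox.reset();

#if NCNN_VULKAN
    // Safe now: no ncnn::Net remains alive.
    ncnn::destroy_gpu_instance();
#endif
}
```

Equivalently, the ncnn::destroy_gpu_instance() call can be moved out of ~RtmPose entirely, to a point where every Net has already gone out of scope.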