Tencent / ncnn

ncnn is a high-performance neural network inference framework optimized for the mobile platform

ncnn inference becomes much slower than at startup when using Vulkan as the backend #1946

Open leoluopy opened 4 years ago

leoluopy commented 4 years ago

I am running CenterFace on my desktop and using ncnn on the GPU (Vulkan backend) for inference. I hit the following problem: when I first launch the program, inference costs about 6-8 ms, but after about 20 seconds it costs about 20-40 ms and never goes back. Has anyone else met this problem? Any suggestions?
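To make the drift visible per call, a small self-contained timing helper (my own sketch, not part of ncnn) can wrap each inference; the function and variable names here are hypothetical:

```cpp
#include <chrono>

// Measure the wall-clock time of a single call in milliseconds.
// Wrapping each inference call with this makes the reported
// 6-8 ms -> 20-40 ms drift observable frame by frame.
template <typename F>
double time_call_ms(F&& f)
{
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Usage would look something like `double ms = time_call_ms([&]{ ex.extract("output", out); });`, logging `ms` on every frame.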

leoluopy commented 4 years ago

Some additional information: I have tried ncnn::set_cpu_powersave(2), but the function returns -1, so it is not working.
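For context, this is how that call is typically used; a sketch under my reading of ncnn's CPU affinity API, not a fix for the slowdown:

```cpp
#include "net.h"  // ncnn; set_cpu_powersave is declared in ncnn's cpu.h

void pin_to_big_cores()
{
    // 0 = all cores, 1 = little cores only, 2 = big cores only.
    // A return value of 0 means the thread affinity was applied;
    // -1 means it could not be, which is plausible on a desktop CPU
    // without big.LITTLE clusters (the API targets mobile SoCs).
    int ret = ncnn::set_cpu_powersave(2);
    (void)ret;
}
```

Note this only affects CPU scheduling; it would not be expected to change Vulkan GPU inference times either way.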

mayaboker commented 4 years ago

Are you setting the net option to perform the inference on the Vulkan GPU? option.use_vulkan_compute = true; In my experience the first GPU inferences are relatively slow, but after a few warmup runs the inference time settles to a stable value.
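The setup and warmup described above could be sketched as follows; the model file and blob names are placeholders for whatever the CenterFace conversion actually produces:

```cpp
#include "net.h"  // ncnn

// Sketch: use_vulkan_compute must be set before load_param()/load_model().
void setup_and_warmup()
{
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;   // route inference through the Vulkan backend
    net.load_param("centerface.param");  // placeholder file names
    net.load_model("centerface.bin");

    // A few warmup passes: the first Vulkan inferences pay one-time costs
    // (pipeline creation, GPU memory allocation), so only time after these.
    for (int i = 0; i < 5; i++)
    {
        ncnn::Mat in(640, 640, 3);       // dummy input of the network's expected size
        ncnn::Extractor ex = net.create_extractor();
        ex.input("input.1", in);         // placeholder input blob name
        ncnn::Mat out;
        ex.extract("537", out);          // placeholder output blob name
    }
}
```

Warmup explains slow *initial* runs, though; it does not by itself explain times that get worse after 20 seconds and stay worse.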

leoluopy commented 4 years ago

> Are you setting the net option to perform the inference on the vulkan gpu? option.use_vulkan_compute = true; From my experience at first the gpu inference is relatively slow, but after a few warmups the inference time decrease to a stable time.

Yes, I have set that option: option.use_vulkan_compute = true;