FunAudioLLM / SenseVoice

Multilingual Voice Understanding Model
https://funaudiollm.github.io/

Why is GPU inference slower? #70

Closed aofengdaxia closed 1 month ago

aofengdaxia commented 1 month ago

I have run inference on some short sentences on my server. Why does the CPU outperform the GPU?

I ran inference with ONNX Runtime. My GPU is an RTX 3090 and my CPU is an Intel 13900KF.

LauraGPT commented 1 month ago

Please raise the issue following the template and include the details of your question.