FunAudioLLM / SenseVoice

Multilingual Voice Understanding Model
https://funaudiollm.github.io/

Why is GPU inference slower? #70

Closed aofengdaxia closed 4 months ago

aofengdaxia commented 4 months ago

I ran inference on some short sentences on my server. Why does the CPU outperform the GPU?

I ran inference with ONNX Runtime. My GPU is an RTX 3090, and my CPU is an Intel i9-13900KF.
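A common cause of this symptom is that ONNX Runtime silently falls back to the CPU when the CUDA execution provider is unavailable or not requested. A minimal sketch (assuming the standard `onnxruntime` Python API; the model path is hypothetical) to check which providers are available and which one a session actually uses:

```python
# Sketch: verify ONNX Runtime is really running on the GPU.
# If "CUDAExecutionProvider" is absent, inference falls back to CPU.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime is not installed in this environment

print("CUDAExecutionProvider" in providers)

# With a real model file (path is illustrative), request CUDA explicitly
# and confirm the session's active providers:
#
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
# )
# print(sess.get_providers())  # CUDA should appear first if it is in use
```

Even with CUDA active, very short inputs can run faster on a strong CPU, since per-call kernel-launch and host-device copy overhead can dominate the small amount of compute.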

LauraGPT commented 4 months ago

Please raise the issue following the template and give details about your question.