matatonic / openedai-speech

An OpenAI API-compatible text-to-speech server using Coqui AI's xtts_v2 and/or piper tts as the backend.
GNU Affero General Public License v3.0

CUDNN_STATUS_NOT_SUPPORTED #9

Closed. Qualzz closed this issue 1 month ago.

Qualzz commented 2 months ago

Everything seems to be fine. Integration with OpenWebUI is working great! I can see CUDA usage when generating TTS.

But there is this cuDNN warning in the logs:

server-1  |  > Text splitted to sentences.
server-1  | ['I can help you with that!']
server-1  |  > Processing time: 1.2529590129852295
server-1  |  > Real-time factor: 0.5532631014964016
server-1  | INFO:     172.18.0.1:44926 - "POST /v1/audio/speech HTTP/1.1" 200 OK
server-1  | /usr/local/lib/python3.11/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
server-1  |   return F.conv1d(input, weight, bias, self.stride,
server-1  |  > Text splitted to sentences.

Should this be ignored?

matatonic commented 2 months ago

I'm seeing these as well. So far they seem to be cosmetic and don't impact quality or performance, but I will leave this issue open until it's resolved.
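Since the warning appears to be cosmetic, one way to keep it out of the logs in the meantime is to filter it with Python's standard `warnings` module before the server starts generating audio. This is a sketch, not part of openedai-speech itself; it assumes the warning text emitted by PyTorch stays stable across versions, so the pattern may need adjusting.

```python
import warnings

# Suppress only the cosmetic cuDNN execution-plan warning raised from
# torch's conv modules; other UserWarnings still get through.
# Assumption: the message wording matches the log above and is stable
# across PyTorch releases.
warnings.filterwarnings(
    "ignore",
    message=r"Plan failed with a cudnnException",
    category=UserWarning,
)
```

The filter matches on the start of the message, so it leaves unrelated warnings visible; an alternative is setting `torch.backends.cudnn.enabled = False`, but that disables cuDNN entirely and may cost performance.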