oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

whisper unknown error #5982

Open kalle07 opened 6 months ago

kalle07 commented 6 months ago

Describe the bug

Transcription just takes a long time, then fails with the error below.

Is there an existing issue for this?

Reproduction

Happens with every model, with whisper small, medium, ....

Screenshot

No response

Logs

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 689, in __call__
    await self.app(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 77, in app
    await response(scope, receive, send)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\responses.py", line 351, in __call__
    await send(
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 50, in sender
    await send(message)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 50, in sender
    await send(message)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\errors.py", line 161, in _send
    await send(message)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 511, in send
    output = self.conn.send(event=h11.EndOfMessage())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\h11\_connection.py", line 512, in send
    data_list = self.send_with_data_passthrough(event)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\h11\_connection.py", line 545, in send_with_data_passthrough
    writer(event, data_list.append)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\h11\_writers.py", line 67, in __call__
    self.send_eom(event.headers, write)
  File "e:\text-generation-webui\installer_files\env\Lib\site-packages\h11\_writers.py", line 96, in send_eom
    raise LocalProtocolError("Too little data for declared Content-Length")
h11._util.LocalProtocolError: Too little data for declared Content-Length
Output generated in 1.16 seconds (30.17 tokens/s, 35 tokens, context 423, seed 1195739433)
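
For context on what the log above is reporting: h11 raises "Too little data for declared Content-Length" when the server ends a response after sending fewer body bytes than the Content-Length header promised. A toy model of that bookkeeping (illustrative names, not h11's real API):

```python
class ContentLengthWriter:
    """Mimics h11's check: a response that declares Content-Length
    must send exactly that many body bytes before end-of-message."""

    def __init__(self, declared: int):
        self.declared = declared  # bytes promised in the Content-Length header
        self.sent = 0             # bytes actually written so far

    def send_data(self, chunk: bytes) -> None:
        self.sent += len(chunk)
        if self.sent > self.declared:
            raise ValueError("Too much data for declared Content-Length")

    def send_eom(self) -> None:
        # End-of-message with an unmet Content-Length is a protocol error,
        # which is the state uvicorn hit in the traceback above.
        if self.sent != self.declared:
            raise ValueError("Too little data for declared Content-Length")


w = ContentLengthWriter(declared=10)
w.send_data(b"short")  # only 5 of the promised 10 bytes
try:
    w.send_eom()
except ValueError as e:
    print(e)
```

In the webui's case this usually means the route handler crashed or returned early mid-response, so the real root cause is further up the stack, not in h11 itself.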

System Info

win10
rtx4060
Quidam2k commented 5 months ago

I've seen that one, and now I'm getting the following. Running on Win10 and an RTX 4090.

Traceback (most recent call last):
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1786, in process_api
    result = await self.call_function(
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1338, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 759, in wrapper
    response = f(*args, **kwargs)
  File "B:\ai_art\text-generation-webui-main\extensions\whisper_stt\script.py", line 48, in auto_transcribe
    transcription = do_stt(audio, whipser_model, whipser_language)
  File "B:\ai_art\text-generation-webui-main\extensions\whisper_stt\script.py", line 36, in do_stt
    transcription = r.recognize_whisper(audio_data, language=whipser_language, model=whipser_model)
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\speech_recognition\__init__.py", line 1486, in recognize_whisper
    wav_bytes = audio_data.get_wav_data(convert_rate=16000)
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\speech_recognition\audio.py", line 146, in get_wav_data
    raw_data = self.get_raw_data(convert_rate, convert_width)
  File "B:\ai_art\text-generation-webui-main\installer_files\env\Lib\site-packages\speech_recognition\audio.py", line 91, in get_raw_data
    raw_data, _ = audioop.ratecv(
audioop.error: not a whole number of frames
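
The `audioop.error: not a whole number of frames` at the bottom means the raw PCM buffer's length is not a multiple of the frame size (sample width × channels), typically because the browser recording arrived truncated. A minimal sketch of a workaround (a hypothetical helper, not part of the whisper_stt extension) that trims the buffer to a whole number of frames before it reaches `audioop.ratecv`:

```python
def trim_to_whole_frames(raw: bytes, sample_width: int, channels: int = 1) -> bytes:
    """Drop trailing bytes so the buffer holds a whole number of frames.

    audioop.ratecv() raises "not a whole number of frames" when
    len(raw) is not a multiple of sample_width * channels.
    """
    frame_size = sample_width * channels
    usable = len(raw) - (len(raw) % frame_size)
    return raw[:usable]


# 7 bytes of 16-bit mono PCM is 3.5 frames; trimming keeps 3 frames (6 bytes).
buf = bytes(range(7))
trimmed = trim_to_whole_frames(buf, sample_width=2)
```

One could apply this to the bytes backing the `AudioData` object in `do_stt` before calling `recognize_whisper`; whether that masks a deeper recording bug in the Gradio audio component is another question.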

seruko11 commented 5 months ago

I'm seeing the same set of issues as above on Win11 and an RTX 4090.
Whisper STT simply has not worked since the Gradio update.

angelochu7 commented 5 months ago

@seruko11, does that mean I can downgrade Gradio to make whisper work? Thank you.

seruko11 commented 4 months ago

> @seruko11, does that mean I can downgrade Gradio to make whisper work? Thank you.

Probably. Would you be able to post a walkthrough of that? I'd appreciate it.
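
In lieu of a confirmed walkthrough, here is a rough sketch of how one might try pinning an older Gradio inside the webui's bundled environment. The version number is an assumption (3.50.2 was the last 3.x release before the 4.x changes); it has not been verified to fix this issue, and other webui components may expect the newer Gradio.

```shell
# Run from the text-generation-webui folder.
# cmd_windows.bat opens a shell with the bundled conda env activated
# (use cmd_linux.sh / cmd_macos.sh on other platforms).
cmd_windows.bat

# Pin an older Gradio. 3.50.2 is a guess at a pre-4.x version that
# predates the breakage -- adjust if your build expects another.
pip install gradio==3.50.2

# Restart the UI afterwards:
start_windows.bat
```

If the UI fails to start after the downgrade, reinstalling the version listed in the repo's requirements.txt should restore the previous state.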