erew123 / alltalk_tts

AllTalk is based on the Coqui TTS engine, similar to the Coqui_tts extension for Text generation webUI, but supports a variety of advanced features, such as a settings page, low VRAM support, DeepSpeed, a narrator, model finetuning, custom models, and WAV file maintenance. It can also be used with third-party software via JSON calls.
GNU Affero General Public License v3.0

TypeError: expected string or bytes-like object, got 'NoneType' #220

Closed guispfilho closed 2 months ago

guispfilho commented 2 months ago

diagnostics.log

**Describe the bug**
I had just installed text-generation-webui and added AllTalk as an extension. It was working perfectly at the beginning; however, after 10-20 minutes I started receiving the same error message when starting text-generation-webui, or when trying to do anything within it.

**To Reproduce**
1. Installed Python 3.9
2. Installed text-generation-webui using `git clone https://github.com/oobabooga/text-generation-webui.git` and running `start_windows.bat`
3. Ran text-generation-webui for the first time and closed it
4. Installed alltalk_tts into the extensions folder using `git clone https://github.com/erew123/alltalk_tts`, running `atsetup.bat` and selecting "BASE REQUIREMENTS: 1) Apply/Re-Apply the requirements for Text-generation-webui."
5. Ran text-generation-webui again and selected alltalk as an extension in the "session" tab.
6. Restarted text-generation-webui and everything was working fine. But as I was adding new .wav folders to the "voices" folder, text-generation-webui stopped working, returning the same error message after restarting it, or clicking anywhere.
7. Unchecking the AllTalk checkbox in the "chat" tab fixes the issue.

**Text/logs**
02:01:18-967303 INFO     Starting Text generation web UI
02:01:18-968549 INFO     Loading settings from "settings.yaml"
02:01:18-971121 INFO     Loading the extension "alltalk_tts"
[AllTalk Startup]     _    _ _ _____     _ _       _____ _____ ____
[AllTalk Startup]    / \  | | |_   _|_ _| | | __  |_   _|_   _/ ___|
[AllTalk Startup]   / _ \ | | | | |/ _` | | |/ /    | |   | | \___ \
[AllTalk Startup]  / ___ \| | | | | (_| | |   <     | |   | |  ___) |
[AllTalk Startup] /_/   \_\_|_| |_|\__,_|_|_|\_\    |_|   |_| |____/
[AllTalk Startup]
[AllTalk Startup] Config file check      : No Updates required
[AllTalk Startup] AllTalk startup Mode   : Text-Gen-webui mode
[AllTalk Startup] WAV file deletion      : Disabled
[AllTalk Startup] DeepSpeed version      : Not Detected
[AllTalk Startup] Model is available     : Checking
[AllTalk Startup] Model is available     : Checked
[AllTalk Startup] Current Python Version : 3.11.9
[AllTalk Startup] Current PyTorch Version: 2.2.1+cu121
[AllTalk Startup] Current CUDA Version   : 12.1
[AllTalk Startup] Current TTS Version    : 0.22.0
[AllTalk Startup] Current TTS Version is : Up to date
[AllTalk Startup] AllTalk Github updated : 6th May 2024 at 22:40
[AllTalk Startup] TTS Subprocess         : Starting up
[AllTalk Startup]
[AllTalk Startup] AllTalk Settings & Documentation: http://127.0.0.1:7851
[AllTalk Startup]
[AllTalk Model] XTTSv2 Local Loading xttsv2_2.0.2 into cuda
[AllTalk Model] Coqui Public Model License
[AllTalk Model] https://coqui.ai/cpml.txt
[AllTalk Model] Model Loaded in 8.55 seconds.
[AllTalk Model] Ready
02:01:50-259193 INFO     Loading the extension "whisper_stt"
02:01:50-264097 INFO     Loading the extension "gallery"

Running on local URL:  http://127.0.0.1:7860

Traceback (most recent call last):
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\gradio\queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1786, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1338, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\site-packages\gradio\utils.py", line 759, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\modules\chat.py", line 466, in redraw_html
    return chat_html_wrapper(history, name1, name2, mode, style, character, reset_cache=reset_cache)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\modules\html_generator.py", line 271, in chat_html_wrapper
    return generate_cai_chat_html(history['visible'], name1, name2, style, character, reset_cache)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\modules\html_generator.py", line 195, in generate_cai_chat_html
    row = [convert_to_markdown_wrapped(entry, use_cache=i != len(history) - 1) for entry in _row]
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\modules\html_generator.py", line 195, in <listcomp>
    row = [convert_to_markdown_wrapped(entry, use_cache=i != len(history) - 1) for entry in _row]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\modules\html_generator.py", line 118, in convert_to_markdown_wrapped
    return convert_to_markdown.__wrapped__(string)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\modules\html_generator.py", line 53, in convert_to_markdown
    string = re.sub(r'(^|[\n])&gt;', r'\1>', string)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\re\__init__.py", line 185, in sub
    return _compile(pattern, flags).sub(repl, string, count)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'NoneType'

**Desktop (please complete the following information):**
- AllTalk was updated: 12/05/2024 (installed today)
- Custom Python environment: no
- Text-generation-webUI was updated: 12/05/2024 (installed today)

Additional context

Maybe the error has something to do with the TTS method; I don't know if it only shows up after switching out of XTTSv2 Local.

Diagnostics file added.

System: RTX 4070, MSI AMD 7700X, 32GB DDR5, Windows 11

erew123 commented 2 months ago

Hi @guispfilho

The error that is occurring happens before AllTalk is active in the processing chain.

  File "C:\appli\apps\text-generation-webui\modules\html_generator.py", line 53, in convert_to_markdown
    string = re.sub(r'(^|[\n])&gt;', r'\1>', string)
  File "C:\appli\apps\text-generation-webui\installer_files\env\Lib\re\__init__.py", line 185, in sub
    return _compile(pattern, flags).sub(repl, string, count)

The file html_generator.py is part of Text-gen-webui, not AllTalk. It runs various filtering behaviours on the text that goes into and out of the LLM: it pre-processes LLM text before sending it on to any Text-gen-webui extensions, and it also processes the return from those extensions, to ensure the text can still be displayed after the extensions have handled it.
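The filtering step named in the traceback can be sketched in isolation. This is an illustrative snippet (not the actual text-gen-webui code), using the same regex shown in the traceback; the function name is hypothetical:

```python
import re

# Hypothetical stand-in for the substitution in convert_to_markdown:
# it unescapes "&gt;" at the start of a line back into a ">" blockquote
# marker before the chat text is rendered.
def unescape_blockquote(string):
    return re.sub(r'(^|[\n])&gt;', r'\1>', string)

print(unescape_blockquote("&gt; quoted reply\nplain text"))
# → "> quoted reply" on the first line, "plain text" on the second
```

This works on any string, but passing None instead of a string is exactly what produces the reported TypeError.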

Here you can see I have added a print statement to html_generator.py, started text-gen-webui without AllTalk loaded, and it runs through that print statement multiple times just on starting up Text-gen-webui.

image

I believe this is because it processes whatever is in the current chat window and checks it on start-up, e.g.:

This conversation being in the chat window generates 6 messages when I start Text-gen-webui:

image

This conversation only generates 2 messages on text-gen-webui startup:

image

As such, I think you may have something strange or corrupt within your chat window, OR the LLM you are using is sending something very odd through the text-gen-webui pre-processors. I know that you have to make certain setup changes when using a Llama 3 model, though I'm not exactly sure what, and that could be related. Ultimately, this issue is somewhere within text-gen-webui's code (I believe), and it will call certain functions in pretty much any extension that can interact with the LLM's generated text, so it wouldn't be unique to AllTalk.

So I have a few suggestions from here:

1) Do you start Text-gen-webui with start_windows.bat? Please make sure you do, as it loads the Python environment that text-gen-webui builds. You mention installing Python 3.9: you shouldn't have to install Python yourself manually, as text-gen-webui builds its own Python environment, and you should always start text-gen-webui with the start_{youros}.xxx file. Unless you have a very specific reason to run a custom Python environment (which can cause issues), you should always run the file I mentioned (also mentioned here): https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install

2) Please clear your chat window in text-gen-webui (New chat) and see if that loads up cleaner afterwards.

3) Re: *But as I was adding new .wav folders to the "voices" folder* — there is no code within AllTalk that actively monitors your voices folder. The only times the list of what's in the voices folder is accessed are when AllTalk starts up, or when you manually click the refresh button in the AllTalk extension. So if your text-gen-webui froze at that point, it suggests something else is going on with your system.

image

I don't know how much you do or don't know about Python environments, but I wrote a basic primer here, which covers a few things with Text-gen-webui: https://github.com/erew123/alltalk_tts?tab=readme-ov-file#installation-and-setup-issues

In principle though, I cannot see anything wrong in the diagnostics file/setup, as long as you are starting text-gen-webui with the start_windows.bat file. I built a fresh install of text-gen-webui myself 2 days ago, so I would be on a similar build to yours (version-wise), as text-gen-webui has had no commits for 4 days.

image

I have been using AllTalk and text-gen-webui and have not personally experienced this issue, which leads me back to the possibility that it is the LLM you are using and what it is sending into the chat window.

If you go through the above with no success, I would suggest looking at the text-gen-webui discussion/issues board. Beyond that, you could send me a JSON chat file from your text-gen-webui and I can try to see if there is something odd in the chat history that could cause the fault, though again, this is more than likely a text-gen-webui thing to look into.

Chat logs are kept in text-generation-webui\logs\chat\ (probably the Assistant folder if you aren't using a character).

Just for reference, the actual fault/error message is saying that `string` in this bit of code:

  File "C:\appli\apps\text-generation-webui\modules\html_generator.py", line 53, in convert_to_markdown
    string = re.sub(r'(^|[\n])&gt;', r'\1>', string)

has a value of None, meaning that there is no text string from the LLM to process.
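That failure mode is easy to reproduce in isolation. A minimal sketch (the guard function at the end is hypothetical, not text-gen-webui's actual fix):

```python
import re

# Reproduce the logged error: re.sub requires a str (or bytes) as its
# third argument, and None raises exactly the TypeError in the log.
try:
    re.sub(r'(^|[\n])&gt;', r'\1>', None)
except TypeError as err:
    print(err)  # expected string or bytes-like object, got 'NoneType'

# A defensive guard (hypothetical) would coerce None to an empty string
# before running the substitution:
def safe_convert(string):
    return re.sub(r'(^|[\n])&gt;', r'\1>', string or "")
```

Whether such a guard is the right fix is for text-gen-webui to decide; the upstream question is why the chat history contains a None entry in the first place.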

Please get back to me if you need to.

Thanks