erew123 / alltalk_tts

AllTalk is based on the Coqui TTS engine, similar to the Coqui_tts extension for Text generation webUI; however, it supports a variety of advanced features, such as a settings page, low VRAM support, DeepSpeed, a narrator, model finetuning, custom models, and wav file maintenance. It can also be used with third-party software via JSON calls.
GNU Affero General Public License v3.0

Deepspeed installed but not showing up as installed when running as an extension for Ooba #405

Closed: spike4379 closed this issue 6 days ago

spike4379 commented 6 days ago

Hey, so I downloaded the latest version of AllTalk 2, extracted it to the extensions folder, and renamed the folder to just alltalkTTS. I ran atsetup and installed it as a separate standalone, and that runs fine from what I can tell, no errors at least. However, if I run the AllTalk extension, it has none of the requirements installed, because the textgen option in AllTalk doesn't actually activate oobabooga's environment and install into it; it just comes up with "pip is not a recognized command" after downloading Miniconda.

Anyway, I activated the environment and installed the textgen requirements you provided, and I manually installed deepspeed-0.14.0+ce78a63-cp311-cp311-win_amd64.whl. AllTalk runs fine, but it says "DeepSpeed: Not detected", whereas on the old XTTS-only version of AllTalk it worked fine; now it doesn't, sadly. What should I do? I tried running both the standalone and the extension on their own, but the extension throws errors about the port being in use because the standalone is running on it. I'm also not sure how to run the extension through Ooba AND the standalone at the same time, because the extension wants to load its model into VRAM and so does the standalone.

I have clean installs

Edit: this is the initialization of AllTalk under Ooba:

[AllTalk TTS] Github updated : 11th November 2024 at 14:19
Branch: alltalkbeta
[AllTalk ENG] Transcoding : ffmpeg found
F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\deepspeed\runtime\zero\linear.py:47: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
  @autocast_custom_fwd
F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\deepspeed\runtime\zero\linear.py:66: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead.
  @autocast_custom_bwd
[AllTalk ENG] DeepSpeed version : Not available
[AllTalk ENG] Python Version : 3.11.10
[AllTalk ENG] PyTorch Version : 2.4.1+cu121
[AllTalk ENG] CUDA Version : 12.1
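Side note: a quick way to confirm whether DeepSpeed is importable at all from Ooba's interpreter is to try the import yourself. A minimal check, assuming you run it with Ooba's environment activated; if this fails, AllTalk has no chance of detecting it:

```python
# Run with text-generation-webui's environment activated; if this import
# fails here, AllTalk (running inside the same environment) cannot see
# DeepSpeed either.
try:
    import deepspeed
    print("DeepSpeed version:", deepspeed.__version__)
except ImportError as exc:
    print("DeepSpeed not importable:", exc)
```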

When I try to generate an output, I get this error in Ooba:

Llama.generate: 1482 prefix-match hit, remaining 1 prompt tokens to eval
Output generated in 5.19 seconds (55.66 tokens/s, 289 tokens, context 1488, seed 691819242)
[AllTalk API] Error with API request: output_file_name: output_file_name needs to be the name without any special characters or file extension, e.g., 'filename'.
[AllTalk TTS] Warning Error occurred during the API request: Status code: 400 Client Error: Bad Request for url: http://127.0.0.1:7851/api/tts-generate
[AllTalk Server] Warning Audio generation failed. Status code: Error occurred during the API request
Traceback (most recent call last):
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\queueing.py", line 566, in process_events
    response = await route_utils.call_process_api(
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1786, in process_api
    result = await self.call_function(
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1350, in call_function
    prediction = await utils.async_iteration(iterator)
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\utils.py", line 583, in async_iteration
    return await iterator.__anext__()
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\utils.py", line 576, in __anext__
    return await anyio.to_thread.run_sync(
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\utils.py", line 559, in run_sync_iterator_async
    return next(iterator)
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\site-packages\gradio\utils.py", line 742, in gen_wrapper
    response = next(iterator)
  File "F:\ChatGPT\text-generation-webui-1.16\modules\chat.py", line 437, in generate_chat_reply_wrapper
    yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'], state['chat_style'], state['character_menu']), history
  File "F:\ChatGPT\text-generation-webui-1.16\modules\html_generator.py", line 326, in chat_html_wrapper
    return generate_cai_chat_html(history['visible'], name1, name2, style, character, reset_cache)
  File "F:\ChatGPT\text-generation-webui-1.16\modules\html_generator.py", line 250, in generate_cai_chat_html
    row = [convert_to_markdown_wrapped(entry, use_cache=i != len(history) - 1) for entry in _row]
  File "F:\ChatGPT\text-generation-webui-1.16\modules\html_generator.py", line 250, in <listcomp>
    row = [convert_to_markdown_wrapped(entry, use_cache=i != len(history) - 1) for entry in _row]
  File "F:\ChatGPT\text-generation-webui-1.16\modules\html_generator.py", line 172, in convert_to_markdown_wrapped
    return convert_to_markdown.wrapped(string)
  File "F:\ChatGPT\text-generation-webui-1.16\modules\html_generator.py", line 78, in convert_to_markdown
    string = re.sub(pattern, replacement, string, flags=re.MULTILINE)
  File "F:\ChatGPT\text-generation-webui-1.16\installer_files\env\Lib\re\__init__.py", line 185, in sub
    return _compile(pattern, flags).sub(repl, string, count)
TypeError: expected string or bytes-like object, got 'NoneType'
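The final TypeError just means convert_to_markdown was handed None instead of a string; re.sub raises exactly this error when its target is None, which a one-liner reproduces:

```python
import re

# re.sub() accepts only str or bytes as its target; a None chat-history
# entry reproduces the traceback's final error exactly.
re.sub(r"\s+", " ", None)
# TypeError: expected string or bytes-like object, got 'NoneType'
```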

I have my mode set to the default oobabooga mode, which is chat-instruct; it doesn't matter if I change it, it still gives me this error. It seems to be tied to the character's name: I tried characters named "Emily AI" and "DR. evil" and it errors out. If I use any character whose name contains a space, it happens instantly.

So my questions are: how do I fix this so that characters with spaces or other special characters in their names work, and how can I get DeepSpeed working again? It has been installed manually in the Ooba environment, yet AllTalk refuses to see it, even though it sees FFmpeg fine.
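In the meantime, a workaround for the 400 error on the calling side would be to strip the character name down before it is used as a filename. A hypothetical helper, not anything from AllTalk's own code:

```python
import re

def sanitize_file_name(name: str) -> str:
    """Reduce a character name like 'Emily AI' to a plain token
    that the /api/tts-generate endpoint will accept ('EmilyAI')."""
    # Keep only letters, digits, and underscores; drop everything else.
    cleaned = re.sub(r"[^A-Za-z0-9_]", "", name)
    return cleaned or "output"

print(sanitize_file_name("Emily AI"))   # EmilyAI
print(sanitize_file_name("DR. evil"))   # DRevil
```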

Thanks for your help :)

I reinstalled AGAIN and the problem is still the same; it seems to be characters with spaces or any kind of "special character" in their name. As for the DeepSpeed problem, I went into the engine settings and tried enabling it there once the extension had loaded, but alas, it still reports that DeepSpeed is disabled when it generates, and generation takes a long time.

erew123 commented 6 days ago

Hi @spike4379

Maybe you missed the message that says not to install AllTalk directly into Text-generation-webui currently?

Currently, you should perform a Standalone Installation in a folder that is NOT within the text-generation-webui folder. Here are the instructions for doing that, and here is a video showing how to do it.

Once you have installed AllTalk as a Standalone Installation, which will automatically install DeepSpeed into its own Python environment, you can then install the Text-generation-webui remote extension so that Text-generation-webui can talk to the AllTalk Standalone Installation. Instructions for doing this are here.
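As a quick test that the two are talking, you can also hit the standalone server's API directly. A minimal sketch using Python's requests; the text_input and language field names follow common AllTalk examples and are assumptions here, so double-check them against the server's built-in API documentation:

```python
import requests

# The standalone server listens on port 7851 by default (the same
# endpoint that appears in the error log above). Field names are
# assumptions; verify against the API documentation.
response = requests.post(
    "http://127.0.0.1:7851/api/tts-generate",
    data={
        "text_input": "Testing the standalone AllTalk server.",
        "language": "en",
        # Must be a plain name: no spaces, special characters, or extension.
        "output_file_name": "testfile",
    },
    timeout=60,
)
print(response.status_code)
print(response.json())
```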

If you want to understand the different Python environments, AllTalk's standalone environment vs. Text-generation-webui's, there is an explainer here.
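A quick way to see which environment any given console is actually using, for example to verify where pip installed a wheel, is to ask the interpreter itself:

```python
import shutil
import sys

# Run this once from AllTalk's environment and once from
# text-generation-webui's; the paths should differ if the two
# environments really are separate.
print("python :", sys.executable)
print("prefix :", sys.prefix)
print("pip    :", shutil.which("pip"))
```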

Thanks

spike4379 commented 5 days ago

Sweet, I'll give that a go! I missed that somehow, thanks.