Closed: gavtography closed this issue 1 year ago
Thank you for the detailed report. If tortoise fails to install, it's quite troublesome: it's set up as a core dependency, so there are more things that were supposed to be installed.
To get "manual" access, use cmd_windows. It will activate conda so you can do pip install and conda install.
I could make tortoise an optional dependency, but I still want to fix it, as it's one of the main features.
For now, if you don't care about tortoise, it's possible to just remove it manually by commenting it out in server.py.
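A minimal sketch of what that (or making tortoise optional, as mentioned above) could look like in server.py; the import path is the one from the traceback quoted below, while the tab-registration call is only an assumption about how server.py wires things up:
# server.py (sketch) - make the tortoise tab optional instead of a hard import
try:
    from src.tortoise.generation_tab_tortoise import generation_tab_tortoise  # import path taken from the traceback
except ModuleNotFoundError:
    generation_tab_tortoise = None
    print("tortoise is not installed; skipping the Tortoise tab")

# ...later, wherever the tabs get built (call name is assumed here, purely for illustration):
# if generation_tab_tortoise is not None:
#     generation_tab_tortoise()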
As for gradio missing - yes, because your global environment isn't used, everything will show up as not installed unless you activate the conda environment (which is the virtual environment with all the dependencies installed).
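A quick way to confirm which Python and which gradio a given prompt is actually using is to run a small check from the cmd_windows prompt and compare it against a normal terminal; this is just an illustrative helper, not part of the project:
# env_check.py (hypothetical helper) - show which interpreter and which gradio install are active
import sys

print("python:", sys.executable)
try:
    import gradio
    print("gradio:", gradio.__version__, "from", gradio.__file__)
except ModuleNotFoundError:
    print("gradio is not installed in this environment")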
On Thu, Jul 20, 2023, 6:11 AM gavtography wrote:
Trying this on an Nvidia GPU, the 1650 Super to be exact. The entire installation process seemed to have gone fine, I selected Nvidia when asked. No issues.
However, at the end I got this error:
Env file not found. Creating default env.
Config file not found. Creating default config.
Traceback (most recent call last):
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\server.py", line 40, in <module>
from src.tortoise.generation_tab_tortoise import generation_tab_tortoise
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\src\tortoise\generation_tab_tortoise.py", line 7, in <module>
from src.tortoise.gen_tortoise import (
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\src\tortoise\gen_tortoise.py", line 7, in <module>
from tortoise.api import TextToSpeech, MODELS_DIR
ModuleNotFoundError: No module named 'tortoise'
Done!
Then it said to press any key to continue. The instructions.txt isn't really clear on how to run the actual webui, so I tried running webui.py and it just says "Conda is not installed. Exiting..." in the terminal and doesn't run.
I then tried to run server.py just in case, and for some reason it said I didn't have gradio installed. So I went ahead and installed gradio just by doing pip install gradio. This let the script run, but, similar to the one-click installer terminal error, it can't find tortoise. I'm also not sure if that's the only issue, or if the one-click installer wanted to continue doing things after that but stopped when that failed.
Thank you for the reply. Apologies if I'm misunderstanding something; I may need a clearer explanation. Just for clarification: I should open cmd_windows and run server.py from there? I received the same error.
I went ahead and commented out tortoise, although I would like to use it at some point; I'm mainly interested in the bark and RVC implementations of this project for now. When I did that, I got this error from server.py:
(C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env) C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0>python C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\server.py
Traceback (most recent call last):
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\server.py", line 9, in <module>
from src.css.css import full_css
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\src\css\css.py", line 12, in <module>
full_css += load_css("src/musicgen/musicgen.css")
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\src\utils\load_css.py", line 2, in load_css
with open(filename, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'src/musicgen/musicgen.css'
I assumed this meant I could comment out the music gen in the server.py too, but that resulted in the same error, so I put it back.
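For what it's worth, that FileNotFoundError is consistent with server.py being launched from the one-click-installers folder rather than from inside tts-generation-webui (as in the prompt above), since load_css opens the path relative to the current working directory; running it from inside the repo folder should avoid it. A minimal sketch of a working-directory-independent load_css, assuming it sits at src/utils/load_css.py inside the repo as the traceback shows:
# src/utils/load_css.py (sketch) - resolve CSS paths against the repo root instead of the caller's cwd
import os

PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

def load_css(filename):
    # "src/musicgen/musicgen.css" and friends now resolve no matter where server.py is launched from
    with open(os.path.join(PROJECT_ROOT, filename), "r") as f:
        return f.read()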
I then tried to run the start_windows bat file which resulted in this:
Loading extensions:
Failed to import module: callback_save_generation_ffmpeg
Error: <class 'ModuleNotFoundError'> No module named 'bark'
Loaded extension: callback_save_generation_musicgen_ffmpeg
Loaded extension: empty_extension
Loaded 1 callback_save_generation extensions.
Loaded 1 callback_save_generation_musicgen extensions.
Traceback (most recent call last):
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\server.py", line 42, in <module>
from src.bark.generation_tab_bark import generation_tab_bark
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\src\bark\generation_tab_bark.py", line 15, in <module>
from src.bark.generate_and_save_metadata import generate_and_save_metadata
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\src\bark\generate_and_save_metadata.py", line 4, in <module>
from bark.generation import models
ModuleNotFoundError: No module named 'bark'
Done!
Press any key to continue . . .
I assume this is part of what you were referring to about other things not being installed because of the tortoise failure. If you're able to suggest any steps I should take, I would appreciate the help. Thank you again!
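Before reinstalling, one quick way to gauge how much of the environment is actually there is to try importing the main packages from the cmd_windows prompt; the module names below are simply the ones that appear in the errors and installation logs in this thread:
# dep_check.py (hypothetical helper) - report which of the webui's main packages import cleanly
import importlib

for name in ["gradio", "torch", "bark", "tortoise", "audiocraft", "fairseq"]:
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except Exception as e:
        print(f"{name}: missing or broken ({type(e).__name__}: {e})")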
Thanks for testing it!
Yes, it seems like it has installed almost nothing. Also, the missing css file is something entirely new... Did it perhaps run out of disk space?
I'd really like to see the full installation log. To save time, we can risk it and just delete the tts-generation-webui folder and run start_windows again. Then hopefully I can see and fix the errors happening there.
The big alternative would be to try WSL, but I can't guarantee that it would work. It's more of an option if Windows 11 just completely refuses to work.
You know, the first time I attempted an install I got a low disk space warning on Windows, but I cleared my recycle bin mid-install, which gave me more storage. I assumed it had installed everything because it ended with me having 1GB left on my drive.
I went ahead and deleted the folder, and I also cleared a ton of storage. I now have 25GB of free space on my main drive, which I assume is enough.
I believe it installed without errors; at least I didn't see any red text, though I might have missed something. Regardless, this is the error I get when trying to start webui.py:
Traceback (most recent call last):
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\requests\compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\tts-generation-webui\server.py", line 4, in <module>
import gradio as gr
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\__init__.py", line 3, in <module>
import gradio.components as components
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\components\__init__.py", line 1, in <module>
from gradio.components.annotated_image import AnnotatedImage
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\components\annotated_image.py", line 9, in <module>
from gradio_client.documentation import document, set_documentation_group
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio_client\__init__.py", line 1, in <module>
from gradio_client.client import Client
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio_client\client.py", line 21, in <module>
import requests
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\requests\__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\requests\exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\requests\compat.py", line 13, in <module>
import charset_normalizer as chardet
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\charset_normalizer\__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\charset_normalizer\api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer\md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\charset_normalizer\constant.py)
(C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env) C:\Users\USER\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0>
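As an aside, this particular ImportError (md.py expecting COMMON_SAFE_ASCII_CHARACTERS that charset_normalizer.constant doesn't provide) usually points to a broken or mixed-version charset-normalizer install inside the env; pip install --force-reinstall charset-normalizer from the cmd_windows prompt is the usual way to clear it. A small check that inspects the installed distribution without importing it (importing it is exactly what crashes above):
# charset_check.py (hypothetical helper) - inspect the installed charset-normalizer distribution
# without importing it
from importlib.metadata import files, version

print("charset-normalizer version:", version("charset-normalizer"))
for f in files("charset-normalizer") or []:
    # mismatched leftovers of md.py / constant.py from an older install are the usual culprit
    if f.name in ("md.py", "constant.py"):
        print(f)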
If it helps, I can delete the folder again and give the entire installation log. Does this generate a physical log file or is there a way to do this within cmd itself? Not sure if my computer will let me copy and paste that much text into pastebin, but I can try.
Yeah, I'm sorry to bother you so much. The way to make a full log is to select all in the terminal and then copy it into a .txt file. Pastebin isn't necessary; you can paste the full .txt here directly.
I tried searching for these errors but nothing comes up, so I'm guessing the root cause is something else. Also, I added a Google Colab, which will hopefully be useful.
I promise you, don't worry about it whatsoever; we can go back and forth as much as needed. I'm used to projects not working the first time on my system, so it's no big deal at all, haha. I look forward to any and all responses.
Before you responded, I tried installing as CPU only instead of Nvidia, which gave the same errors. Interestingly though, after trying again with Nvidia so I could give you the txt, this time it found conflicts in the beginning, which took a while to "resolve", I suppose. However, it ended up with the same errors as in my last reply.
Same PC, I was just censoring the name in previous replies, which wasn't super necessary.
Thank you again!
Oh wow, there are so many errors; I'll need to take a look at it. My guess is that it's pretty much a complete installation failure, since Conda isn't supposed to fail; it's been the constant factor that keeps working.
Interesting, that's a shame. Take your time; I'd be happy to hear any ideas. I've always had bad luck with projects on this computer: a few months ago I spent at least 12 hours across 2 days in a GitHub issue thread just to get gpt4all working on my system. So it happens, no worries.
This repository is exactly what I was looking for in my project, so I look forward to any updates. I appreciate your time. Feel free to reply whenever and however many times you want; I don't mind the emails.
Hmm, I found this: https://github.com/conda/conda/issues/12155
I am not ready at the moment to update the conda that comes with the installer, but it might be a workaround; perhaps it's worthwhile.
The errors reported here seem similar, and I found that issue by searching for Conda + Windows 11 + conflicts.
So I ended up buying a new M.2 drive and doing a completely fresh install of Windows 11. The first thing I did was install Python 3.9.0, then git, then this one-click installer. It's working perfectly now.
Honestly, I'm not sure it's worth worrying about this error; I'm sure my previous Windows setup was a huge garbled mess that this installer couldn't make sense of.
I'm glad it worked!
The weird thing about these is that they tend to form patterns. Even though the "circumstances" of the installation might be shaky, sometimes they help resolve someone else's problem.
I'll close the issue for now, but thanks for posting it! When others have a similar issue, searching for the logs might point to it. (I will probably add one last comment with the full log for searchability.)
What is your GPU
A) NVIDIA
B) AMD
C) Apple M Series
D) None (I want to run in CPU mode)
Input> A
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed /
UnsatisfiableError: The following specifications were found to be incompatible with a past
explicit spec that is not an explicit spec in this operation (ninja):
- cuda-toolkit
- ninja
- pytorch-cuda=11.7 -> cuda=11.7 -> cuda-toolkit[version='>=11.7.0|>=11.7.1']
- pytorch==2[build=py3.10_cuda11.7*] -> pytorch-cuda[version='>=11.7,<11.8']
- torchaudio -> pytorch-cuda[version='11.6.*|11.7.*|11.8.*']
- torchaudio -> pytorch==2.0.1 -> ninja
- torchaudio -> pytorch==2.0.1 -> pytorch-cuda[version='>=11.6,<11.7|>=11.7,<11.8|>=11.8,<11.9']
- torchaudio -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|2.0.1|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0']
- torchvision -> pytorch-cuda[version='11.6.*|11.7.*|11.8.*']
- torchvision -> pytorch==2.0.1 -> ninja
- torchvision -> pytorch==2.0.1 -> pytorch-cuda[version='>=11.6,<11.7|>=11.7,<11.8|>=11.8,<11.9']
- torchvision -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|2.0.1|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0|1.5.1']
The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package vs2015_runtime conflicts for:
libwebp -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
brotlipy -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
libpng -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
zlib -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
bzip2 -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
libpng -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
mkl -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
lz4-c -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
ninja -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
cryptography -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
win_inet_pton -> python[version='>=3.11,<3.12.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
freetype -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
libuv -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
xz -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
intel-openmp -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
lerc -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
libuv -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
openssl -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
mkl_random -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
ninja-base -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
pip -> python[version='>=3.10,<3.11.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
cffi -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
cryptography -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
vc -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
zstd -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
jpeg -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
intel-openmp -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
numpy-base -> vs2015_runtime[version='>=14.16.27012,<15.0a0|>=14.27.29016,<15.0a0']
tk -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
xz -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
mkl-service -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
markupsafe -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
typing_extensions -> python[version='>=3.8,<3.9.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
pytorch==2[build=py3.10_cuda11.7*] -> intel-openmp -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
tk -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
certifi -> python[version='>=3.7'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libwebp-base -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
torchaudio -> numpy[version='>=1.11'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.27.29016,<15.0a0']
brotlipy -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
tbb -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libdeflate -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libtiff -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
pysocks -> python[version='>=3.7,<3.8.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libwebp -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
lerc -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
setuptools -> python[version='>=3.10,<3.11.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libtiff -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
jinja2 -> markupsafe[version='>=2.0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
pyopenssl -> cryptography[version='>=38.0.0,<40'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
ninja -> python -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
tbb -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
filelock -> python[version='>=3.11,<3.12.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
pycparser -> python[version='>=3.6'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
giflib -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
torchvision -> jpeg -> vs2015_runtime[version='>=14.16.27012|>=14.16.27012,<15.0a0|>=14.27.29016']
mkl -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
urllib3 -> brotli-python[version='>=1.0.9'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
lz4-c -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
pillow -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
mpmath -> python[version='>=3.11,<3.12.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
markupsafe -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
cffi -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libffi -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
torchaudio -> vs2015_runtime[version='>=14.16.27012|>=14.16.27012,<15.0a0']
freetype -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
mkl-service -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012|>=14.27.29016,<15.0a0']
pillow -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
ffmpeg -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
sympy -> python[version='>=3.11,<3.12.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libffi -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
numpy-base -> vc[version='>=14.2,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
torchvision -> vs2015_runtime[version='>=14.27.29016,<15.0a0']
sqlite -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
networkx -> python[version='>=3.11,<3.12.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0|>=14.27.29016,<15.0a0']
giflib -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
mkl_random -> numpy[version='>=1.21,<2.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.27.29016,<15.0a0|>=14.16.27012']
idna -> python[version='>=3.11,<3.12.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
python=3.10 -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
libdeflate -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
numpy -> vs2015_runtime[version='>=14.16.27012,<15.0a0|>=14.27.29016,<15.0a0']
sqlite -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
requests -> python[version='>=3.10,<3.11.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
numpy -> vc[version='>=14.2,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
vs2015_runtime
openssl -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
zstd -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
libwebp-base -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
python=3.10 -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
charset-normalizer -> python[version='>=3.5'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
jpeg -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
wheel -> python[version='>=3.11,<3.12.0a0'] -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
ffmpeg -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.15.26706|>=14.27.29016|>=14.16.27012']
zlib -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']
Package tk conflicts for:
pillow -> tk[version='>=8.6.10,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
pysocks -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
ninja -> python -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
mkl_random -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
numpy-base -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
sympy -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
cffi -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
mkl-service -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
numpy -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
pytorch==2[build=py3.10_cuda11.7*] -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
python=3.10 -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
filelock -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
pycparser -> python[version='>=3.6'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
mpmath -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
torchvision -> pillow[version='>=5.3.0,!=8.3.*'] -> tk[version='>=8.6.10,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.7,<8.7.0a0']
pillow -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0']
charset-normalizer -> python[version='>=3.5'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
networkx -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
jinja2 -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
idna -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
pyopenssl -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
brotlipy -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
typing_extensions -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
tk
urllib3 -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
wheel -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
cryptography -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
pip -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
win_inet_pton -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
requests -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
torchaudio -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
setuptools -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
certifi -> python[version='>=3.7'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
markupsafe -> python[version='>=3.11,<3.12.0a0'] -> tk[version='>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0']
Package six conflicts for:
pyopenssl -> cryptography[version='>=3.3'] -> six[version='>=1.4.1']
pyopenssl -> six[version='>=1.5.2']
pip -> html5lib -> six[version='>=1.9']
mkl-service -> six
cryptography -> six[version='>=1.4.1']
urllib3 -> cryptography[version='>=1.3.4'] -> six[version='>=1.4.1|>=1.5.2']
mkl_random -> mkl-service[version='>=2.3.0,<3.0a0'] -> six
numpy-base -> mkl-service[version='>=2.3.0,<3.0a0'] -> six
mkl_fft -> mkl-service[version='>=2.3.0,<3.0a0'] -> six
numpy -> mkl-service[version='>=2.3.0,<3.0a0'] -> six
Package libcufft conflicts for:
cuda-toolkit -> cuda-libraries[version='>=11.7.0'] -> libcufft[version='>=10.4.2.58|>=10.5.0.43|>=10.5.1.100|>=10.5.2.100|>=10.6.0.107|>=10.7.0.55|>=10.7.1.112|>=10.7.2.124|>=10.7.2.50|>=11.0.8.15|>=11.0.2.54|>=11.0.2.4|>=11.0.1.95|>=11.0.0.21|>=10.9.0.58|>=10.7.2.91|>=10.6.0.54']
pytorch-cuda=11.7 -> libcufft[version='>=10.7.2.50,<10.9.0.58']
pytorch-cuda=11.7 -> cuda-libraries[version='>=11.7,<11.8'] -> libcufft[version='>=10.7.2.124|>=10.7.2.50|>=10.7.2.91']
cuda-libraries-dev -> libcufft-dev[version='>=10.7.2.50'] -> libcufft[version='>=10.5.0.43|>=10.5.1.100|>=10.5.2.100|>=10.6.0.107|>=10.7.0.55|>=10.7.1.112|>=10.7.2.124|>=10.7.2.50|>=11.0.8.15|>=11.0.2.54|>=11.0.2.4|>=11.0.1.95|>=11.0.0.21|>=10.9.0.58|>=10.7.2.91|>=10.6.0.54']
cuda-runtime -> cuda-libraries[version='>=11.7.0'] -> libcufft[version='>=10.4.2.58|>=10.5.0.43|>=10.5.1.100|>=10.5.2.100|>=10.6.0.107|>=10.7.0.55|>=10.7.1.112|>=10.7.2.124|>=10.7.2.50|>=11.0.8.15|>=11.0.2.54|>=11.0.2.4|>=11.0.1.95|>=11.0.0.21|>=10.9.0.58|>=10.7.2.91|>=10.6.0.54']
libcufft
cuda-libraries -> libcufft[version='>=10.4.2.58|>=10.5.0.43|>=10.5.1.100|>=10.5.2.100|>=10.6.0.107|>=10.7.0.55|>=10.7.1.112|>=10.7.2.124|>=10.7.2.50|>=11.0.8.15|>=11.0.2.54|>=11.0.2.4|>=11.0.1.95|>=11.0.0.21|>=10.9.0.58|>=10.7.2.91|>=10.6.0.54']
torchaudio -> pytorch-cuda=11.7 -> libcufft[version='>=10.7.2.50,<10.9.0.58|>=10.9.0.58,<11.0.0.21']
torchvision -> pytorch-cuda=11.8 -> libcufft[version='>=10.7.2.50,<10.9.0.58|>=10.9.0.58,<11.0.0.21']
pytorch==2[build=py3.10_cuda11.7*] -> pytorch-cuda[version='>=11.7,<11.8'] -> libcufft[version='>=10.7.2.50,<10.9.0.58']
libcufft-dev -> libcufft[version='>=10.5.0.43|>=10.5.1.100|>=10.5.2.100|>=10.6.0.107|>=10.7.0.55|>=10.7.1.112|>=10.7.2.124|>=10.7.2.50|>=11.0.8.15|>=11.0.2.54|>=11.0.2.4|>=11.0.1.95|>=11.0.0.21|>=10.9.0.58|>=10.7.2.91|>=10.6.0.54']
Package libcurand conflicts for:
libcurand
pytorch-cuda=11.7 -> cuda-libraries[version='>=11.7,<11.8'] -> libcurand[version='>=10.2.10.50|>=10.2.10.91']
cuda-toolkit -> cuda-libraries[version='>=11.7.0'] -> libcurand[version='>=10.2.10.50|>=10.3.3.53|>=10.3.2.106|>=10.3.2.56|>=10.3.1.124|>=10.3.1.50|>=10.3.0.86|>=10.2.10.91|>=10.2.9.124|>=10.2.9.55|>=10.2.7.107|>=10.2.6.48|>=10.2.5.120|>=10.2.5.100|>=10.2.5.43|>=10.2.4.58']
cuda-libraries -> libcurand[version='>=10.2.10.50|>=10.3.3.53|>=10.3.2.106|>=10.3.2.56|>=10.3.1.124|>=10.3.1.50|>=10.3.0.86|>=10.2.10.91|>=10.2.9.124|>=10.2.9.55|>=10.2.7.107|>=10.2.6.48|>=10.2.5.120|>=10.2.5.100|>=10.2.5.43|>=10.2.4.58']
libcurand-dev -> libcurand[version='>=10.2.10.50|>=10.3.3.53|>=10.3.2.106|>=10.3.2.56|>=10.3.1.124|>=10.3.1.50|>=10.3.0.86|>=10.2.10.91|>=10.2.9.124|>=10.2.9.55|>=10.2.7.107|>=10.2.6.48|>=10.2.5.120|>=10.2.5.100|>=10.2.5.43']
cuda-libraries-dev -> libcurand-dev[version='>=10.2.10.50'] -> libcurand[version='>=10.2.10.50|>=10.3.3.53|>=10.3.2.106|>=10.3.2.56|>=10.3.1.124|>=10.3.1.50|>=10.3.0.86|>=10.2.10.91|>=10.2.9.124|>=10.2.9.55|>=10.2.7.107|>=10.2.6.48|>=10.2.5.120|>=10.2.5.100|>=10.2.5.43']
cuda-runtime -> cuda-libraries[version='>=11.7.0'] -> libcurand[version='>=10.2.10.50|>=10.3.3.53|>=10.3.2.106|>=10.3.2.56|>=10.3.1.124|>=10.3.1.50|>=10.3.0.86|>=10.2.10.91|>=10.2.9.124|>=10.2.9.55|>=10.2.7.107|>=10.2.6.48|>=10.2.5.120|>=10.2.5.100|>=10.2.5.43|>=10.2.4.58']
...
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 23.1.0
latest version: 23.5.2
Please update conda by running
$ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
conda install conda=23.5.2
## Package Plan ##
environment location: C:\Users\_\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env
added / updated specs:
- ffmpeg
The following packages will be UPDATED:
ca-certificates conda-forge::ca-certificates-2023.5.7~ --> pkgs/main::ca-certificates-2023.05.30-haa95532_0
The following packages will be SUPERSEDED by a higher-priority channel:
certifi conda-forge/noarch::certifi-2023.5.7-~ --> pkgs/main/win-64::certifi-2023.5.7-py310haa95532_0
Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 23.1.0
latest version: 23.5.2
Please update conda by running
$ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
conda install conda=23.5.2
## Package Plan ##
environment location: C:\Users\_\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env
added / updated specs:
- nodejs=18.16.1
The following packages will be SUPERSEDED by a higher-priority channel:
ca-certificates pkgs/main::ca-certificates-2023.05.30~ --> conda-forge::ca-certificates-2023.5.7-h56e8100_0
certifi pkgs/main/win-64::certifi-2023.5.7-py~ --> conda-forge/noarch::certifi-2023.5.7-pyhd8ed1ab_0
Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Cloning into 'tts-generation-webui'...
remote: Enumerating objects: 903, done.
remote: Counting objects: 100% (167/167), done.
remote: Compressing objects: 100% (60/60), done.
remote: Total 903 (delta 132), reused 107 (delta 107), pack-reused 736
Receiving objects: 99% (894/903), 21.18 MiB | 316.00 KiB/s
Receiving objects: 100% (903/903), 21.28 MiB | 320.00 KiB/s, done.
Resolving deltas: 100% (437/437), done.
Already up to date.
Collecting suno-bark@ git+https://github.com/suno-ai/bark@599fed040e52c89e0b3580e02e2684b2c9100701#egg=suno-bark (from -r requirements.txt (line 6))
Using cached suno_bark-0.0.1a0-py3-none-any.whl
Collecting tortoise@ git+https://github.com/rsxdalv/tortoise-tts@3796069550ef6ef62d9fb48372b9789f8f3019c5#egg=tortoise (from -r requirements.txt (line 8))
Using cached TorToiSe-2.5.0-py3-none-any.whl
Updating dependencies...
Installing musicgen, audiocraft dependencies...
Collecting audiocraft@ git+https://git@github.com/facebookresearch/audiocraft@d874966#egg=audiocraft (from -r requirements_audiocraft.txt (line 2))
Cloning https://****@github.com/facebookresearch/audiocraft (to revision d874966) to c:\users\_\appdata\local\temp\pip-install-3961xsz_\audiocraft_fa31ba35911c4477be78791e17f625cd
Running command git clone --filter=blob:none --quiet 'https://****@github.com/facebookresearch/audiocraft' 'C:\Users\_\AppData\Local\Temp\pip-install-3961xsz_\audiocraft_fa31ba35911c4477be78791e17f625cd'
WARNING: Did not find branch or tag 'd874966', assuming revision or ref.
Running command git checkout -q d874966
Resolved https://****@github.com/facebookresearch/audiocraft to commit d874966
Preparing metadata (setup.py) ... done
Collecting hydra-core>=1.1 (from audiocraft@ git+https://git@github.com/facebookresearch/audiocraft@d874966#egg=audiocraft->-r requirements_audiocraft.txt (line 2))
Using cached hydra_core-1.3.2-py3-none-any.whl (154 kB)
Collecting omegaconf<2.4,>=2.2 (from hydra-core>=1.1->audiocraft@ git+https://git@github.com/facebookresearch/audiocraft@d874966#egg=audiocraft->-r requirements_audiocraft.txt (line 2))
Using cached omegaconf-2.3.0-py3-none-any.whl (79 kB)
Collecting antlr4-python3-runtime==4.9.* (from hydra-core>=1.1->audiocraft@ git+https://git@github.com/facebookresearch/audiocraft@d874966#egg=audiocraft->-r requirements_audiocraft.txt (line 2))
Using cached antlr4_python3_runtime-4.9.3-py3-none-any.whl
Installing collected packages: antlr4-python3-runtime, omegaconf, hydra-core
Attempting uninstall: antlr4-python3-runtime
Found existing installation: antlr4-python3-runtime 4.8
Uninstalling antlr4-python3-runtime-4.8:
Successfully uninstalled antlr4-python3-runtime-4.8
Attempting uninstall: omegaconf
Found existing installation: omegaconf 2.0.6
Uninstalling omegaconf-2.0.6:
Successfully uninstalled omegaconf-2.0.6
Attempting uninstall: hydra-core
Found existing installation: hydra-core 1.0.7
Uninstalling hydra-core-1.0.7:
Successfully uninstalled hydra-core-1.0.7
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fairseq 0.12.4 requires hydra-core<1.1,>=1.0.7, but you have hydra-core 1.3.2 which is incompatible.
fairseq 0.12.4 requires omegaconf<2.1, but you have omegaconf 2.3.0 which is incompatible.
Successfully installed antlr4-python3-runtime-4.9.3 hydra-core-1.3.2 omegaconf-2.3.0
Successfully installed musicgen, audiocraft dependencies
Installing Bark Voice Clone, bark-hubert-quantizer dependencies...
Collecting fairseq@ https://github.com/Sharrnah/fairseq/releases/download/v0.12.4/fairseq-0.12.4-cp310-cp310-win_amd64.whl (from -r requirements_bark_hubert_quantizer.txt (line 2))
Downloading https://github.com/Sharrnah/fairseq/releases/download/v0.12.4/fairseq-0.12.4-cp310-cp310-win_amd64.whl (11.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.0/11.0 MB 12.8 MB/s eta 0:00:00
Collecting bark_hubert_quantizer@ git+https://github.com/rsxdalv/bark-voice-cloning-HuBERT-quantizer@bark_hubert_quantizer#egg=bark_hubert_quantizer (from -r requirements_bark_hubert_quantizer.txt (line 3))
Cloning https://github.com/rsxdalv/bark-voice-cloning-HuBERT-quantizer (to revision bark_hubert_quantizer) to c:\users\_\appdata\local\temp\pip-install-qpgm0lf_\bark-hubert-quantizer_c45341b2e9ae4204b87557bdac06ec8d
Running command git clone --filter=blob:none --quiet https://github.com/rsxdalv/bark-voice-cloning-HuBERT-quantizer 'C:\Users\_\AppData\Local\Temp\pip-install-qpgm0lf_\bark-hubert-quantizer_c45341b2e9ae4204b87557bdac06ec8d'
Running command git checkout -b bark_hubert_quantizer --track origin/bark_hubert_quantizer
branch 'bark_hubert_quantizer' set up to track 'origin/bark_hubert_quantizer'.
Switched to a new branch 'bark_hubert_quantizer'
Resolved https://github.com/rsxdalv/bark-voice-cloning-HuBERT-quantizer to commit c982344158b811f1056a59ea6c285b6e4501a631
Preparing metadata (setup.py) ... done
Collecting hydra-core<1.1,>=1.0.7 (from fairseq@ https://github.com/Sharrnah/fairseq/releases/download/v0.12.4/fairseq-0.12.4-cp310-cp310-win_amd64.whl->-r requirements_bark_hubert_quantizer.txt (line 2))
Using cached hydra_core-1.0.7-py3-none-any.whl (123 kB)
Collecting omegaconf<2.1 (from fairseq@ https://github.com/Sharrnah/fairseq/releases/download/v0.12.4/fairseq-0.12.4-cp310-cp310-win_amd64.whl->-r requirements_bark_hubert_quantizer.txt (line 2))
Using cached omegaconf-2.0.6-py3-none-any.whl (36 kB)
Collecting antlr4-python3-runtime==4.8 (from hydra-core<1.1,>=1.0.7->fairseq@ https://github.com/Sharrnah/fairseq/releases/download/v0.12.4/fairseq-0.12.4-cp310-cp310-win_amd64.whl->-r requirements_bark_hubert_quantizer.txt (line 2))
Using cached antlr4_python3_runtime-4.8-py3-none-any.whl
Installing collected packages: antlr4-python3-runtime, omegaconf, hydra-core
Attempting uninstall: antlr4-python3-runtime
Found existing installation: antlr4-python3-runtime 4.9.3
Uninstalling antlr4-python3-runtime-4.9.3:
Successfully uninstalled antlr4-python3-runtime-4.9.3
Attempting uninstall: omegaconf
Found existing installation: omegaconf 2.3.0
Uninstalling omegaconf-2.3.0:
Successfully uninstalled omegaconf-2.3.0
Attempting uninstall: hydra-core
Found existing installation: hydra-core 1.3.2
Uninstalling hydra-core-1.3.2:
Successfully uninstalled hydra-core-1.3.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
audiocraft 0.0.2a2 requires hydra-core>=1.1, but you have hydra-core 1.0.7 which is incompatible.
Successfully installed antlr4-python3-runtime-4.8 hydra-core-1.0.7 omegaconf-2.0.6
Successfully installed Bark Voice Clone, bark-hubert-quantizer dependencies
Installing RVC dependencies...
Collecting torchcrepe@ git+https://github.com/rsxdalv/torchcrepe@patch-1 (from -r requirements_rvc.txt (line 1))
Cloning https://github.com/rsxdalv/torchcrepe (to revision patch-1) to c:\users\_\appdata\local\temp\pip-install-m_7f4q84\torchcrepe_29270f7b2a70470bb8652fa03bde5b8b
Running command git clone --filter=blob:none --quiet https://github.com/rsxdalv/torchcrepe 'C:\Users\_\AppData\Local\Temp\pip-install-m_7f4q84\torchcrepe_29270f7b2a70470bb8652fa03bde5b8b'
Running command git checkout -b patch-1 --track origin/patch-1
branch 'patch-1' set up to track 'origin/patch-1'.
Switched to a new branch 'patch-1'
Resolved https://github.com/rsxdalv/torchcrepe to commit 9cc34800fe2f2fce2f6f665dc8a4dfc48a371e39
Preparing metadata (setup.py) ... done
Collecting rvc-beta@ git+https://github.com/rsxdalv/Retrieval-based-Voice-Conversion-WebUI@package (from -r requirements_rvc.txt (line 2))
Cloning https://github.com/rsxdalv/Retrieval-based-Voice-Conversion-WebUI (to revision package) to c:\users\_\appdata\local\temp\pip-install-m_7f4q84\rvc-beta_b903458d2f5e48a5823bb887550ab1b3
Running command git clone --filter=blob:none --quiet https://github.com/rsxdalv/Retrieval-based-Voice-Conversion-WebUI 'C:\Users\_\AppData\Local\Temp\pip-install-m_7f4q84\rvc-beta_b903458d2f5e48a5823bb887550ab1b3'
Running command git checkout -b package --track origin/package
branch 'package' set up to track 'origin/package'.
Switched to a new branch 'package'
Resolved https://github.com/rsxdalv/Retrieval-based-Voice-Conversion-WebUI to commit b27fea3d85a3f7f1e33be031c6bac206b080e8ca
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Successfully installed RVC dependencies
Env file not found. Creating default env.
Traceback (most recent call last):
File "C:\Users\_\Desktop\one-click-installers-tts-6.0\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\requests\compat.py", line 11, in