Closed FlameinfirenBr closed 8 months ago
Check which torch is installed
Which version
@rsxdalv 2.0.0 cu114
Edit: Updated my PyTorch to a more recent version and it still gets the same error; here's my complete PyTorch version check:
Name: torch
Version: 2.0.1+cu117
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: c:\users\flameinfiren\appdata\local\programs\python\python310\lib\site-packages
Requires: filelock, jinja2, networkx, sympy, typing-extensions
Required-by: accelerate, encodec, fairscale, kornia, lion-pytorch, lycoris-lora, nomi, pytorch-lightning, suno-bark, test-tube, timm, torch-fidelity, torchaudio, torchmetrics, torchvision, xformers
Edit2: Complete torch traceback
Traceback (most recent call last):
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 225, in generate_voice
    full_generation = get_prompts(wav_file, use_gpu)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 87, in get_prompts
    semantic_prompt = get_semantic_prompt(path_to_wav, device)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 81, in get_semantic_prompt
    semantic_vectors = get_semantic_vectors(path_to_wav, device)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 46, in get_semantic_vectors
    hubert_model = _load_hubert_model(device)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 26, in _load_hubert_model
    hubert_model = CustomHubert(
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\bark_hubert_quantizer\pre_kmeans_hubert.py", line 60, in __init__
    checkpoint = torch.load(checkpoint_path, map_location=device)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1172, in _load
    result = unpickler.load()
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1083, in restore_location
    return default_restore_location(storage, map_location)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\Flameinfiren\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Edit3: The hardware is an RTX 2060 and an i5-11400F
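For reference, the fallback that the error message itself suggests can be sketched like this. The checkpoint filename below is a hypothetical placeholder, not the project's actual path, and on a correctly installed GPU build the CPU fallback would never trigger:

```python
import os

import torch

# Hypothetical placeholder; substitute the HuBERT checkpoint path that
# bark_hubert_quantizer actually loads on your machine.
checkpoint_path = "hubert_base_ls960.pt"

# Pick CUDA only when it is genuinely available, otherwise fall back to
# CPU - exactly what the RuntimeError message recommends.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if os.path.exists(checkpoint_path):
    # map_location remaps CUDA-saved storages onto the chosen device, so
    # the load succeeds even when torch.cuda.is_available() is False.
    checkpoint = torch.load(checkpoint_path, map_location=device)
else:
    print(f"{checkpoint_path} not found - nothing to load in this sketch")
```

Note this only works around the symptom; the underlying problem in this thread is a CPU-only torch build.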
Remove torch:
pip uninstall torch
Install torch (see https://pytorch.org/get-started/locally/):
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
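After reinstalling, a quick way to confirm you actually got a CUDA wheel (these are standard torch attributes, nothing project-specific):

```python
import torch

# A CUDA wheel reports a version like "2.0.1+cu118" and a non-None
# torch.version.cuda; a CPU-only wheel reports "+cpu" and None.
print("torch:", torch.__version__)
print("built against CUDA:", torch.version.cuda)
print("GPU visible at runtime:", torch.cuda.is_available())
```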
@Aamir3d Same traceback and problem after doing it
Please try the options in this thread: https://discuss.pytorch.org/t/pytorch-installation-rtx-3060/187895/9
Likely a torch+CUDA conflict.
Just looking at the versions, I'm not immediately certain what's up. I think I put in some safeguards to prevent torch from being reinstalled after the initial environment is set up, but some project somewhere can pin "torch==2.0.1" and throw the whole balance out the window. I'm leaning a little towards this being a more unique problem, because if you have torch+cu___ then it is the GPU-enabled torch.
Additionally, installing with conda is a bit more likely to give the right results. And any time you are trying to modify or install packages for the project, you need to use the same virtual (conda) environment. This can be done by running cmd_windows to get a terminal that has the conda environment active.
For more context, here's a different but potentially related problem: https://github.com/rsxdalv/tts-generation-webui/issues/121
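The conda-environment point above can be checked from inside Python itself; this is a generic stdlib probe, assuming the installer_files\env layout used by the one-click installers:

```python
import os
import sys

# sys.executable shows which interpreter is actually running; for this
# project it should live somewhere under ...\installer_files\env.
print("interpreter:", sys.executable)

# conda exports CONDA_PREFIX when an environment is active; None here
# usually means cmd_windows / conda activate was never run.
print("active conda env:", os.environ.get("CONDA_PREFIX"))
```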
@Aamir3d Tried those things and still nothing, sadly. @rsxdalv #121 has more or less the same problem that I had, but his solution did not work for me; fresh installs didn't work either.
Edit: It's the exact same traceback. Should I also post the installation log from a fresh install?
Installation logs are always useful! Also, do other AI projects work for you, such as AUTOMATIC1111?
Used the one-click installer and it fails to detect the numerous GPUs connected to this system.
Hardware: https://pastebin.com/wRVCpcep Log: https://pastebin.com/GcFg8C2P
re-running update [didn't solve]
reboot [didn't solve]
AUTOMATIC1111 is installed and working; the oobabooga web UI is also installed and works.
conda activate C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env
python
import torch
(C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env) C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0>python
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.cuda.is_available())  # This should return True if CUDA is available
False
>>> print(torch.cuda.device_count())  # This should return the number of GPUs available
0
>>> print(torch.cuda.get_device_name(0))  # This should return the name of the first CUDA device
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\cuda\__init__.py", line 365, in get_device_name
    return get_device_properties(device).name
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\cuda\__init__.py", line 395, in get_device_properties
    _lazy_init()  # will define _get_device_properties
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
The installation failed to set up CUDA support properly.
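Independent of torch, it's worth checking whether the system even sees the NVIDIA driver. This is a generic probe (not part of the installer); nvidia-smi ships with the driver on both Windows and Linux:

```python
import shutil
import subprocess

# If nvidia-smi is missing or lists no GPUs, no torch build will be able
# to use CUDA, regardless of which wheel is installed.
smi = shutil.which("nvidia-smi")
if smi is None:
    print("nvidia-smi not found - driver not installed or not on PATH")
else:
    result = subprocess.run([smi, "-L"], capture_output=True, text=True)
    print(result.stdout or result.stderr)
```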
Exit the Python terminal: Ctrl+Z or exit()
Get rid of the wrong version:
conda uninstall pytorch
Install the proper version; in my case it will be:
conda install pytorch torchvision torchaudio cudatoolkit=12.2 -c pytorch
nope...
conda update -n base -c defaults conda
conda config --prepend channels pytorch
conda install pytorch torchvision torchaudio -c pytorch
pip install chardet charset_normalizer (more testing?)
pip install -U typing_extensions
pip install -U fastapi
pip uninstall fastapi uvicorn gradio
pip install fastapi uvicorn gradio
pip install gradio==3.34.0 Pillow==9.3.0 uvicorn==0.21.1 pydantic==1.10.13 typer==0.3.0 (testing?)
pip install numpy==1.25.0 [solves numpy compatibility]
pip uninstall numba
pip install numba
It loads again! Yay!
The short answer is: you have the CPU build of PyTorch, so it's neither using nor detecting the GPUs. The long answer is that it might be due to an installation issue, but it seems to be something new in your case.
In your log I can see several unexpected things. For example, it says that the Python version is different. This normally happens only when people don't use the one-click installers. Then there's another error:
I haven't seen this error in a while. But it could indicate the main problem - maybe for your machine, it needs either developer mode or administrator privileges for running the project. I don't remember others reporting this, but maybe everyone already had developer mode, I'm pretty sure I did. (Or this is a new windows update etc etc).
So, although there could be multiple problems at play here, I would recommend first switching to developer mode (unfortunately it might require a reinstall). Many projects use HuggingFace to download model weights.
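One way to test whether developer mode (or admin rights) is actually the blocker: HuggingFace's download cache relies on symlinks, which Windows only permits with developer mode or elevation. A quick stdlib probe, independent of the project:

```python
import os
import tempfile

# On Windows, os.symlink raises OSError without developer mode or admin
# rights; on Linux/macOS it normally just works.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.txt")
    link = os.path.join(d, "link.txt")
    open(target, "w").close()
    try:
        os.symlink(target, link)
        ok = True
    except OSError:
        ok = False

print("symlinks allowed:", ok)
```

If this prints False on Windows, enabling developer mode or running the installer elevated is the first thing to try.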
OK, I'm going to play with it for a while and see what happens. I'll do a quick reinstall, as I'm basically rewriting items and making more of a mess than actually solving anything.
I'll try running as admin first to see if that solves it. Just very odd, but interesting.
Thank you
https://chat.openai.com/share/d6fceefc-6c09-43ca-a57b-b7932a329c20 - adding in ChatGPT for diagnosing; it might help with determining the problem.
2023-11-28 01:20:05 | WARNING | xformers | WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.0.0+cpu)
Python 3.10.11 (you have 3.10.13)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
2023-11-28 01:20:05 | WARNING | xformers | Triton is not available, some optimizations will not be enabled.
This is just a warning: No module named 'triton'
The installer needs to ask the user whether or not to use Triton. I'm not using Triton.
As for Triton, it's not necessary, but it's baked deep inside the models, so removing all of the warnings about it isn't doable. From the log, it's still running the CPU PyTorch.
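The "2.0.0+cpu" in the xformers warning above is the giveaway. A small check using only standard torch attributes:

```python
import torch

build = torch.__version__
# CPU-only wheels carry a "+cpu" local version suffix and report
# torch.version.cuda as None.
if "+cpu" in build or torch.version.cuda is None:
    print(f"CPU-only torch ({build}): GPU features and xformers kernels will fail")
else:
    print(f"CUDA torch ({build}), built against CUDA {torch.version.cuda}")
```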
Since this issue was active, a few things about installation have changed, so it's hopefully working now. Additionally, I plan on adding a torch manager inside the webui to allow fixing torch issues with GPU/MPS/CPU.
If there are any more problems please reopen this issue.
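A hypothetical sketch of the facts such a torch manager might gather; the function name and dictionary fields are invented for illustration, not the project's actual API:

```python
import torch

def torch_report() -> dict:
    """Collect the basic environment facts a torch manager UI could show."""
    cuda_ok = torch.cuda.is_available()
    mps = getattr(torch.backends, "mps", None)
    return {
        "torch": torch.__version__,
        "cuda_build": torch.version.cuda,  # None on CPU-only wheels
        "cuda_available": cuda_ok,
        "device_count": torch.cuda.device_count() if cuda_ok else 0,
        "mps_available": mps is not None and mps.is_available(),
    }

print(torch_report())
```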
I'm getting RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. And I have no idea how to change it.