Open Jake36921 opened 11 months ago
When you start `./api_inference_server.py` there should be an option to choose whether to use the GPU (it's optional). Please check that you selected it. This is around line 14 of `api_inference_server.py`: it defaults to CPU if you just press Enter instead of typing "Y" and then Enter.
```python
device = torch.device('cpu')  # default to CPU
use_gpu = torch.cuda.is_available()
print("Detecting GPU...")
if use_gpu:
    print("GPU detected!")
    device = torch.device('cuda')
    print("Using GPU? (Y/N)")
    if input().lower() == 'y':
        print("Using GPU...")
    else:
        print("Using CPU...")
        use_gpu = False
        device = torch.device('cpu')
```
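As an aside, a non-interactive variant of this selection logic would let the server start unattended. This is only a sketch, not part of the real `api_inference_server.py`; the `AIWAIFU_DEVICE` environment variable and the `select_device` helper are made-up names for illustration:

```python
import os

def select_device(cuda_available: bool) -> str:
    """Pick 'cuda' or 'cpu' from an env override plus availability.

    AIWAIFU_DEVICE is a hypothetical variable: 'cpu' forces CPU,
    'cuda' requests the GPU, and 'auto' (the default) uses the GPU
    whenever it is present. In a real server you would pass in
    torch.cuda.is_available() and wrap the result in torch.device(...).
    """
    requested = os.environ.get("AIWAIFU_DEVICE", "auto").lower()
    if requested == "cpu":
        return "cpu"
    if requested in ("cuda", "auto") and cuda_available:
        return "cuda"
    return "cpu"
```

This avoids the situation in this report, where pressing Enter at the prompt silently falls back to CPU.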
But from what I'm seeing, I think the GPU didn't get detected at all. This can happen when there is a driver problem. Try typing these in your command line:

```
nvidia-smi
nvcc --version
```

Both of these should output something that doesn't look like an error. If not, it means you need to install the CUDA driver for your GPU. You could also try manually installing PyTorch with CUDA: https://pytorch.org/get-started/locally/
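To help interpret the results, the two common failure modes can be told apart by combining `torch.version.cuda` (which is `None` on CPU-only builds) with `torch.cuda.is_available()`. A minimal triage sketch — the `diagnose` helper is my own name, not from any library:

```python
def diagnose(built_cuda, cuda_available):
    """Rough triage of why torch.cuda.is_available() might be False.

    built_cuda:     the value of torch.version.cuda
                    (None when the installed wheel was built without CUDA).
    cuda_available: the value of torch.cuda.is_available().
    """
    if built_cuda is None:
        return "cpu-only wheel: reinstall torch from a +cuXXX index"
    if not cuda_available:
        return "wheel has CUDA support, so suspect the driver: check nvidia-smi"
    return "ok: torch should see the GPU"
```

In other words, a failing `nvcc`/`nvidia-smi` points at the driver side, while a `None` from `torch.version.cuda` points at the PyTorch install itself.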
CUDA seems to be installed, from what I've read.
It's legit that you have a driver, but please check that it's the correct version: https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with

Also check your PyTorch installation, i.e. whether you installed it with GPU support: https://pytorch.org/get-started/locally/
The newest Nvidia drivers (536.67) probably don't support the CUDA version I have. I'll try to see if rolling back the drivers fixes it.
It still doesn't detect the GPU after rolling back to previous drivers and installing PyTorch with CUDA 11.8.
This may be a little bit troublesome, but can you try activating your environment and typing the following?

```python
# you are in the Python shell
import torch
print(torch.cuda.is_available())
```

Does it return True? I need to know if it's a problem with Torch; if it returns True, then it's my code that's the problem.
```
(venv) PS E:\etc\AIwaifu> python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32

Warning: This Python interpreter is in a conda environment, but the environment has not been activated. Libraries may fail to load. To activate this environment please see https://conda.io/activation

Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.cuda.is_available())
False
```
I also double-checked that torch is installed:
```
(venv) PS E:\etc\AIwaifu> pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Looking in indexes: https://download.pytorch.org/whl/cu118
Requirement already satisfied: torch in e:\etc\aiwaifu\venv\lib\site-packages (2.0.1)
Requirement already satisfied: torchvision in e:\etc\aiwaifu\venv\lib\site-packages (0.15.2+cu118)
Requirement already satisfied: torchaudio in e:\etc\aiwaifu\venv\lib\site-packages (2.0.2)
Requirement already satisfied: filelock in e:\etc\aiwaifu\venv\lib\site-packages (from torch) (3.12.2)
Requirement already satisfied: typing-extensions in e:\etc\aiwaifu\venv\lib\site-packages (from torch) (4.7.1)
Requirement already satisfied: sympy in e:\etc\aiwaifu\venv\lib\site-packages (from torch) (1.12)
Requirement already satisfied: networkx in e:\etc\aiwaifu\venv\lib\site-packages (from torch) (3.1)
Requirement already satisfied: jinja2 in e:\etc\aiwaifu\venv\lib\site-packages (from torch) (3.1.2)
Requirement already satisfied: numpy in e:\etc\aiwaifu\venv\lib\site-packages (from torchvision) (1.24.4)
Requirement already satisfied: requests in e:\etc\aiwaifu\venv\lib\site-packages (from torchvision) (2.31.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in e:\etc\aiwaifu\venv\lib\site-packages (from torchvision) (9.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in e:\etc\aiwaifu\venv\lib\site-packages (from jinja2->torch) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in e:\etc\aiwaifu\venv\lib\site-packages (from requests->torchvision) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in e:\etc\aiwaifu\venv\lib\site-packages (from requests->torchvision) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in e:\etc\aiwaifu\venv\lib\site-packages (from requests->torchvision) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in e:\etc\aiwaifu\venv\lib\site-packages (from requests->torchvision) (2023.5.7)
Requirement already satisfied: mpmath>=0.19 in e:\etc\aiwaifu\venv\lib\site-packages (from sympy->torch) (1.3.0)
(venv) PS E:\etc\AIwaifu>
```
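One detail worth noticing in that pip output: `torch` reports plain `2.0.1`, while `torchvision` reports `0.15.2+cu118`. Wheels from the cu118 index normally carry a `+cu118` local-version suffix, so the bare version suggests a leftover CPU-only torch that pip considered "already satisfied" and never replaced; a `pip install --force-reinstall` against the cu118 index may be needed. A tiny sketch of that suffix check (the `is_cuda_build` helper is mine, for illustration only):

```python
def is_cuda_build(version: str) -> bool:
    """True when a wheel's version string carries a CUDA local tag
    such as '+cu118' (the convention used by the PyTorch CUDA indexes)."""
    return "+cu" in version

# Versions from the pip output above:
print(is_cuda_build("2.0.1"))         # torch
print(is_cuda_build("0.15.2+cu118")) # torchvision
```

If this diagnosis is right, it would also explain why `torch.cuda.is_available()` returned False despite a working driver.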
Try using the Poetry installation method provided in the README instead.
Most of the package installations worked fine, but pyopenjtalk caused errors:
```
(E:\etc\AIwaifu\envs) PS E:\etc\AIwaifu> poetry install
Installing dependencies from lock file

Package operations: 1 install, 0 updates, 0 removals

  • Installing pyopenjtalk (0.3.0)

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  Traceback (most recent call last):
    File "E:\etc\AIwaifu\envs\lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 353, in

  at envs\lib\site-packages\poetry\installation\chef.py:147 in _prepare
      143│
      144│         error = ChefBuildError("\n\n".join(message_parts))
      145│
      146│         if error is not None:
    → 147│             raise error from None
      148│
      149│         return path
      150│
      151│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a problem with poetry but with pyopenjtalk (0.3.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "pyopenjtalk (==0.3.0)"'.

(E:\etc\AIwaifu\envs) PS E:\etc\AIwaifu>
```
Try:

```
poetry run pip install --no-use-pep517 pyopenjtalk==0.3.0
```
Doesn't work either:

```
(E:\etc\AIwaifu\envs) PS E:\etc\AIwaifu> poetry run pip install --no-use-pep517 pyopenjtalk==0.3.0
Collecting pyopenjtalk==0.3.0
  Using cached pyopenjtalk-0.3.0.tar.gz (1.5 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [14 lines of output]
      C:\Users\rapha\AppData\Local\Temp\pip-install-znxh0qf3\pyopenjtalk_1b86cc28ee3946f4a72d6ca7a097986c\setup.py:26: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
        _CYTHON_INSTALLED = ver >= LooseVersion(min_cython_ver)
      Traceback (most recent call last):
        File "
  note: This error originates from a subprocess, and is likely not a problem with pip.
  error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
I'll try to solve this on my end; please bear with it until the next release.
I have this GPU, and I'll take a look and see if I can reproduce this issue.
Thanks! Please notify me on Discord if you see what the problem is but don't have enough time to fix it!
I've got my 4080 16GB delivered; now I'll begin reproducing it.
Everything works fine except it's using my CPU instead of the GPU.
![image](https://github.com/HRNPH/AIwaifu/assets/113159896/afc70447-88d1-4b2f-82fd-19dc4317f930)