mrhan1993 / Fooocus-API

FastAPI-powered API for Fooocus
GNU General Public License v3.0

Torch not compiled with CUDA enabled #84

Closed. JubileusDeus closed this issue 8 months ago.

JubileusDeus commented 8 months ago

I got this issue when trying to generate a new image: `Import default pipeline error: Torch not compiled with CUDA enabled`. It also appears when launching the API.

Here is the full exception message:

```
Exception in thread Thread-1 (preplaod_pipeline):
Traceback (most recent call last):
  File "C:\Users\jdm00\miniconda3\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\jdm00\miniconda3\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\main.py", line 351, in preplaod_pipeline
    import modules.default_pipeline as _
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\modules\default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\modules\core.py", line 1, in <module>
    from modules.patch import patch_all
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\modules\patch.py", line 6, in <module>
    import fcbh.model_base
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\backend\headless\fcbh\model_base.py", line 2, in <module>
    from fcbh.ldm.modules.diffusionmodules.openaimodel import UNetModel
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\backend\headless\fcbh\ldm\modules\diffusionmodules\openaimodel.py", line 16, in <module>
    from ..attention import SpatialTransformer
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\backend\headless\fcbh\ldm\modules\attention.py", line 10, in <module>
    from .sub_quadratic_attention import efficient_dot_product_attention
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\backend\headless\fcbh\ldm\modules\sub_quadratic_attention.py", line 27, in <module>
    from fcbh import model_management
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\backend\headless\fcbh\model_management.py", line 114, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Projects\Personal\AI\Tools\Fooocus-API\repositories\Fooocus\backend\headless\fcbh\model_management.py", line 83, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\jdm00\miniconda3\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
    _lazy_init()
  File "C:\Users\jdm00\miniconda3\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```

konieshadow commented 8 months ago

It seems the CUDA driver and PyTorch versions are not compatible. Can you run `nvidia-smi` and show me the output with the "CUDA Version" label? And run `pip list` to find the installed torch version?
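
You can also check the installed torch build directly. This one-liner prints the torch version, the CUDA version it was built against (`None` for a CPU-only build), and whether CUDA is usable:

```
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```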

JubileusDeus commented 8 months ago

Here is the result of the `nvidia-smi` command:

```
Mon Dec 11 17:18:09 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 546.17                 Driver Version: 546.17       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2060      WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   61C    P8               6W /  90W |    292MiB /  6144MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A     22544    C+G   ...Editor\2022.3.12f1\Editor\Unity.exe         N/A |
+---------------------------------------------------------------------------------------+
```

And the result of `pip list`:

```
Package                        Version
------------------------------ ------------
aiofiles                       23.1.0
aiohttp                        3.8.4
aiosignal                      1.3.1
anyio                          3.7.1
async-timeout                  4.0.2
attrs                          23.1.0
beautifulsoup4                 4.9.3
CacheControl                   0.13.1
cachetools                     5.3.1
certifi                        2023.5.7
cffi                           1.15.1
charset-normalizer             3.2.0
click                          8.1.4
colorama                       0.4.6
cryptography                   41.0.1
dataclasses-json               0.5.9
exceptiongroup                 1.1.2
faiss-cpu                      1.7.4
fastapi                        0.99.1
fastapi-sessions               0.3.2
filelock                       3.12.2
firebase-admin                 6.2.0
frozenlist                     1.4.0
fsspec                         2023.6.0
google-api-core                2.11.1
google-api-python-client       2.92.0
google-auth                    2.21.0
google-auth-httplib2           0.1.0
google-cloud                   0.34.0
google-cloud-aiplatform        1.27.1
google-cloud-bigquery          3.11.3
google-cloud-core              2.3.3
google-cloud-firestore         2.11.1
google-cloud-resource-manager  1.10.2
google-cloud-storage           2.10.0
google-cloud-vision            3.4.4
google-crc32c                  1.5.0
google-resumable-media         2.5.0
googleapis-common-protos       1.59.1
greenlet                       2.0.2
grpc-google-iam-v1             0.12.6
grpcio                         1.56.0
grpcio-status                  1.56.0
h11                            0.14.0
httplib2                       0.22.0
huggingface-hub                0.16.4
idna                           3.4
itsdangerous                   2.1.2
Jinja2                         3.1.2
joblib                         1.3.1
langchain                      0.0.149
MarkupSafe                     2.1.3
marshmallow                    3.19.0
marshmallow-enum               1.5.1
mpmath                         1.3.0
msgpack                        1.0.5
multidict                      6.0.4
mypy-extensions                1.0.0
networkx                       3.1
nltk                           3.8.1
numexpr                        2.8.4
numpy                          1.25.1
openapi-schema-pydantic        1.2.4
packaging                      23.1
Pillow                         10.0.0
pip                            23.2
proto-plus                     1.22.3
protobuf                       4.23.4
pyasn1                         0.5.0
pyasn1-modules                 0.3.0
pycparser                      2.21
pydantic                       1.10.11
PyJWT                          2.7.0
pyparsing                      3.1.0
pyre-extensions                0.0.29
python-dateutil                2.8.2
PyYAML                         6.0
regex                          2023.6.3
requests                       2.28.2
rsa                            4.9
safetensors                    0.3.1
scikit-learn                   1.3.0
scipy                          1.11.1
sentence-transformers          2.2.2
sentencepiece                  0.1.99
setuptools                     65.5.0
Shapely                        1.8.5.post1
six                            1.16.0
sniffio                        1.3.0
soupsieve                      2.4.1
SQLAlchemy                     2.0.18
starlette                      0.27.0
sympy                          1.12
tenacity                       8.2.2
threadpoolctl                  3.2.0
tokenizers                     0.13.3
torch                          2.0.1
torchaudio                     2.0.2+cu118
torchvision                    0.15.2
tqdm                           4.65.0
transformers                   4.30.2
typing_extensions              4.7.1
typing-inspect                 0.9.0
uritemplate                    4.1.1
urllib3                        1.26.16
uvicorn                        0.22.0
vertexai                       0.0.1
xformers                       0.0.20
yarl                           1.9.2
```

konieshadow commented 8 months ago

@JubileusDeus The PyTorch version and nvidia-smi output look fine. Did you install the CUDA toolkit? You can check with `nvcc --version`, and download it from the official website.
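
Note that the two commands report different things, so their versions can differ:

```
nvidia-smi       # the "CUDA Version" shown is the highest version the driver supports
nvcc --version   # the version of the locally installed CUDA toolkit, if any
```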

mrhan1993 commented 8 months ago

@JubileusDeus

You can try following these steps:

```
pip uninstall torch torchvision -y
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
```

and then run `python main.py` to start the server.
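
If the reinstall worked, torch should now report a CUDA build rather than a CPU-only one:

```
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```

A CUDA build prints something like `11.8 True`; a CPU-only build prints `None False`.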

JubileusDeus commented 8 months ago

> @JubileusDeus The PyTorch version and nvidia-smi output look fine. Did you install the CUDA toolkit? You can check with `nvcc --version`, and download it from the official website.

I ran the command and it's not recognized, which is weird since I use Stable Diffusion and the official Fooocus without problems.

I will download it and test again, thanks!

JubileusDeus commented 8 months ago

@konieshadow After installing the CUDA toolkit, it still displays the same error. The command `nvcc --version` is recognized now.

Here is the full message:

```
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Fooocus-API version: 0.3.22
Fooocus exists and URL is correct.
Fooocus checkout finished for 3bc9ac88fd8aa05e157aeff78eaa3631552119e0.
[Fooocus-API] Task queue size: 3, queue history size: 100
Preload pipeline
Exception in thread Thread-1 (preplaod_pipeline):
Traceback (most recent call last):
  [identical to the traceback above]
AssertionError: Torch not compiled with CUDA enabled
```


Result of the `nvcc --version` command:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Nov__3_17:51:05_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.3, V12.3.103
Build cuda_12.3.r12.3/compiler.33492891_0
```

JubileusDeus commented 8 months ago

@mrhan1993 I already had the latest torch. After running these commands, it still does not work!

mrhan1993 commented 8 months ago

@JubileusDeus OK, environment problems are always a headache. Let's start over. Follow these steps:

Run `git pull` to get the latest code.

Open PowerShell or a terminal; if you have a conda env activated, run `conda deactivate`.

```
# change dir to Fooocus-API, e.g. cd path\to\Fooocus-API
python -m venv venv
.\venv\Scripts\activate
# If it worked, the prompt will look like this: (venv) PS E:\Fooocus-API>
pip install -r requirements.txt
pip install torch==2.1.0 torchvision==0.16.0 --extra-index-url https://download.pytorch.org/whl/cu121
```

After that, run `python -c "import torch; print(torch.cuda.is_available())"`; if all goes well, it will print `True`.

Then run `python main.py`.

I tried to use conda but failed, so here I use venv to configure the environment. I will experiment more with conda; you can keep this issue open.


Update: the info below is just for conda users; if your setup works, do nothing unless you know what you are doing.

With conda the process is similar to venv. First change directory to Fooocus-API, then:

```
conda create -n fooocus-api python=3.10 -y
conda activate fooocus-api

pip install -r requirements.txt
pip install torch==2.1.0 torchvision==0.16.0 --extra-index-url https://download.pytorch.org/whl/cu121
```

Then run `python main.py`.

JubileusDeus commented 8 months ago

@mrhan1993 I tried your suggestions and now it works like a charm. Looks like I will have to improve my Python and environment-management skills. I can now generate images using the API without problems. Thanks a lot for the help, and thanks to @konieshadow too.

Now, I don't know if I can ask this here, but is there a way to get an event that fires when a generation finishes, or to stream the console's debug info in real time? Or will I have to manually query the queue status for that?

mrhan1993 commented 8 months ago

@JubileusDeus If I understand correctly, I'm afraid so: you'll need to poll manually to obtain real-time progress information.
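
A minimal polling sketch is below. The route and the response field names are assumptions for illustration, not the project's documented API; check the README for the actual endpoints and schema:

```python
# Minimal polling sketch. The route "/v1/generation/query-job" and the
# "job_stage" / "job_progress" fields are ASSUMED names for illustration;
# check the Fooocus-API README for the real routes and response schema.
import time

import requests

BASE_URL = "http://127.0.0.1:8888"  # assumed default host and port


def wait_for_job(job_id: str, interval: float = 2.0) -> dict:
    """Poll the job-query endpoint until the task succeeds or fails."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/v1/generation/query-job",
            params={"job_id": job_id},
            timeout=10,
        )
        resp.raise_for_status()
        job = resp.json()
        if job.get("job_stage") in ("SUCCESS", "ERROR"):
            return job
        print(f"progress: {job.get('job_progress')}%")
        time.sleep(interval)
```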

JubileusDeus commented 8 months ago

@mrhan1993 Understood. Thanks again for the help!