mrhan1993 / Fooocus-API

FastAPI powered API for Fooocus
GNU General Public License v3.0

PLEASE HELP #241

Closed Manikandan192 closed 3 months ago

Manikandan192 commented 3 months ago

[Fooocus-API] Task queue size: 100, queue history size: 0, webhook url: None
Preload pipeline
Exception in thread Thread-2 (preplaod_pipeline):
Traceback (most recent call last):
  File "C:\Users\manik\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\manik\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Project\Project\test\Fooocus-API\main.py", line 393, in preplaod_pipeline
    import modules.default_pipeline as _
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\modules\default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\modules\core.py", line 1, in <module>
    from modules.patch import patch_all
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\modules\patch.py", line 5, in <module>
    import ldm_patched.modules.model_base
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\ldm_patched\modules\model_base.py", line 2, in <module>
    from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 18, in <module>
    from ..attention import SpatialTransformer, SpatialVideoTransformer, default
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\ldm_patched\ldm\modules\attention.py", line 12, in <module>
    from .sub_quadratic_attention import efficient_dot_product_attention
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\ldm_patched\ldm\modules\sub_quadratic_attention.py", line 27, in <module>
    from ldm_patched.modules import model_management
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\ldm_patched\modules\model_management.py", line 118, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Project\Project\test\Fooocus-API\repositories\Fooocus\ldm_patched\modules\model_management.py", line 87, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\manik\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 787, in current_device
    _lazy_init()
  File "C:\Users\manik\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
INFO:     Started server process [10056]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8888 (Press CTRL+C to quit)
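The root cause is the last line of the traceback: the installed torch wheel is a CPU-only build, so the first CUDA call made while importing model_management fails. A minimal way to reproduce the same error outside Fooocus-API (assuming a CPU-only torch install) is:

import torch

# On a CPU-only torch build there is no compiled CUDA support,
# so any call that initializes CUDA raises the same AssertionError.
print(torch.cuda.is_available())   # False on a CPU-only build
torch.cuda.current_device()        # AssertionError: Torch not compiled with CUDA enabled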

alphaloop-vincent commented 3 months ago

For me, installing the packages manually solved the issue. See the readme.md for this: https://github.com/mrhan1993/Fooocus-API?tab=readme-ov-file#predownload-and-install
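For reference, a typical command for installing a CUDA-enabled torch build from PyTorch's own wheel index looks like the line below. The CUDA tag (cu121 here) is an assumption; use whatever versions the readme specifies for your setup:

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121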

Once you have the torch packages installed, you can open a Python shell and run:

import torch
torch.cuda.is_available()

If the output is True, you're good to go.
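A slightly fuller check (a sketch using only standard torch attributes) also shows which build is installed and which GPU torch can see:

import torch

# Which torch build is installed, and was it compiled against CUDA?
print(torch.__version__)          # e.g. "2.1.0+cpu" vs "2.1.0+cu121"
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # must be True for Fooocus-API on GPU

# Only query the device if CUDA is actually available.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))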

If the manual install does not fix your issue, it might not be package related but rather driver related; check whether your NVIDIA driver comes with CUDA support.
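One way to check (assuming an NVIDIA GPU) is to run nvidia-smi in a terminal: its header reports the driver version and the highest CUDA version that driver supports, which must be at least as new as the CUDA version your torch build was compiled against. If nvidia-smi is missing or errors out, update or reinstall the GPU driver first.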