cocktailpeanut / comfyui.pinokio

A 1-click launcher for https://github.com/comfyanonymous/ComfyUI

Torch not compiled with CUDA enabled - Nvidia - Linux #6

Open allenhs opened 11 months ago

allenhs commented 11 months ago

I get the following error when trying to start a fresh install on Fedora Linux with an NVIDIA GPU (RTX 4090):

(env) (base) [allen@pandora ComfyUI]$ python3 main.py
** ComfyUI start up time: 2023-11-27 20:54:01.685928

Prestartup times for custom nodes:
  0.0 seconds: /home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/custom_nodes/ComfyUI-Manager

Traceback (most recent call last):
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/main.py", line 72, in <module>
    import execution
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/execution.py", line 12, in <module>
    import nodes
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/nodes.py", line 20, in <module>
    import comfy.diffusers_load
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/diffusers_load.py", line 4, in <module>
    import comfy.sd
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/sd.py", line 5, in <module>
    from comfy import model_management
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/model_management.py", line 114, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/model_management.py", line 83, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/env/lib/python3.10/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/env/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
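This assertion means the app's virtual environment contains a CPU-only PyTorch build. As a general diagnostic (not something from this thread), the wheel's version string usually tells you which build you have: CPU-only wheels report a `+cpu` local version suffix (e.g. `2.1.2+cpu`) while CUDA wheels report a `+cuNNN` suffix (e.g. `2.1.2+cu121`). A minimal sketch; the helper name `is_cuda_wheel` is mine, not part of ComfyUI or PyTorch:

```python
# Hypothetical helper (not from this thread): classify a PyTorch install
# by its version string. CPU-only wheels look like "2.1.2+cpu" and
# CUDA wheels like "2.1.2+cu121"; anything else is reported as unknown.

def is_cuda_wheel(version: str):
    """Return True for a CUDA build, False for CPU-only, None if unclear."""
    if "+" not in version:
        return None  # no local version tag; can't tell from the string alone
    local = version.split("+", 1)[1]
    if local.startswith("cu"):
        return True
    if local == "cpu":
        return False
    return None  # e.g. "+rocm5.7" or other non-CUDA builds


if __name__ == "__main__":
    import importlib

    try:
        torch = importlib.import_module("torch")
        # torch.cuda.is_available() is the authoritative runtime check.
        print(torch.__version__, "->", is_cuda_wheel(torch.__version__))
        print("torch.cuda.is_available():", torch.cuda.is_available())
    except ImportError:
        print("torch is not installed in this environment")
```

Running this inside the `env` that ComfyUI uses (rather than the base shell) is what matters, since the traceback shows the error comes from `env/lib/python3.10/site-packages/torch`.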

frischka commented 5 months ago

Same problem here with Ubuntu 22.04 and NVIDIA, trying to use SDXL Turbo.

Traceback (most recent call last):
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/gradio/queueing.py", line 528, in process_events
    response = await route_utils.call_process_api(
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/gradio/route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/gradio/blocks.py", line 1908, in process_api
    result = await self.call_function(
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/gradio/blocks.py", line 1485, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/gradio/utils.py", line 808, in wrapper
    response = f(*args, **kwargs)
  File "/home/david/pinokios/api/sdxl-turbo.git/app.py", line 27, in run
    return pipes["txt2img"](prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 1054, in __call__
    ) = self.encode_prompt(
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 383, in encode_prompt
    prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
  File "/home/david/pinokios/api/sdxl-turbo.git/env/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

nvidia-smi: NVIDIA-SMI 535.171.04 Driver Version: 535.171.04 CUDA Version: 12.2

Kernel 6.5.0-35-generic
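The driver reported above supports CUDA 12.2, so the usual fix for this error is to reinstall PyTorch from the official CUDA wheel index inside the app's own virtual environment. This is a general sketch, not an official Pinokio fix; the `cu121` index below is an assumption matching a CUDA 12.x driver, and paths/package sets may differ per app:

```shell
# Run inside the app's virtual environment, e.g. after:
#   source env/bin/activate
# Replace cu121 with the CUDA series matching your driver if needed.
pip uninstall -y torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Verify the reinstall picked up a CUDA build:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If `torch.cuda.is_available()` still prints `False` afterwards, the reinstall went into a different environment than the one the app launches with.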