Iory1998 opened this issue 1 month ago
-_-
Yeah, even if you install it via the llama-cpp-python library with CMAKE_ARGS set, the 3090 still has this problem. I could not find a solution, and I have a 3090 too.
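For reference, the CMAKE_ARGS reinstall mentioned above usually looks like this. A hedged sketch, not a guaranteed fix: the CMake flag name depends on the llama-cpp-python version (newer releases use `-DGGML_CUDA=on`; older ones used `-DLLAMA_CUBLAS=on`):

```shell
# Force a clean source rebuild so pip does not reuse a cached CPU-only build.
CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```

Without `--force-reinstall --no-cache-dir`, pip may silently reuse the previously built CPU-only package, in which case CMAKE_ARGS never takes effect.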
I currently have two installations of ComfyUI; on the older one GPU acceleration works, on the new one it does not. I did some research and noticed that the defect is not in Comfy itself but in one of its dependencies, though I could not find where the real problem lies. I am also using a 3090, but my experience rules out a hardware problem.
Llama.cpp seems to use the CPU instead of the GPU (RTX 3090), which makes the process very slow. No matter how many GPU layers I set, the model is always offloaded to the CPU. Also, it seems that BLAS is activated but not used?
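One way to confirm whether layers actually land on the GPU is to run the model with `verbose=True` and look for llama.cpp's `offloaded N/M layers to GPU` line in the load log. A minimal sketch of that check (the helper function and the sample log strings here are hypothetical, not captured from a real run):

```python
import re

def gpu_layers_offloaded(log_text: str):
    """Parse llama.cpp's verbose load log for the GPU offload line.

    When layers are really placed on the GPU, llama.cpp prints a line
    like "llm_load_tensors: offloaded 35/35 layers to GPU". If that line
    is missing (or reports 0), inference runs on the CPU regardless of
    the n_gpu_layers setting. Returns (offloaded, total) or None.
    """
    m = re.search(r"offloaded (\d+)/(\d+) layers to GPU", log_text)
    if m:
        return int(m.group(1)), int(m.group(2))
    return None

# Illustrative log fragments (assumed, not from an actual session):
cpu_log = "llm_load_tensors: ggml ctx size = 0.11 MiB\n"
gpu_log = "llm_load_tensors: offloaded 35/35 layers to GPU\n"

print(gpu_layers_offloaded(cpu_log))  # None -> everything stayed on CPU
print(gpu_layers_offloaded(gpu_log))  # (35, 35) -> fully offloaded
```

If the offload line never appears even with a high `n_gpu_layers`, the wheel was most likely built without CUDA, which would also explain BLAS showing as "activated" while the GPU sits idle.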