heshengtao / comfyui_LLM_party

Dify in ComfyUI includes Omost, GPT-SoVITS, ChatTTS, and FLUX prompt nodes; provides access to Feishu and Discord; and adapts to all LLMs with OpenAI/Gemini-like interfaces, such as o1, Ollama, Qwen, GLM, DeepSeek, Moonshot, and Doubao. Adapted to local LLMs such as Llama, Peach-9B, Qwen, and GLM. Links to Neo4j knowledge graphs and implements GraphRAG / RAG / HTML-to-image functionality.
GNU Affero General Public License v3.0
674 stars 68 forks

ComfyUI update broke LLM Party #82

Open IsItDanOrAi opened 2 weeks ago

IsItDanOrAi commented 2 weeks ago

Updated ComfyUI and it broke LLM Party. Anyone have any idea of the cause?

Python 3.11.8
Windows 10
PyTorch 2.3.0+cu121
Torchvision 0.18.0+cu121
xformers 0.0.26.post1
numpy 1.26.4
Pillow n/a
OpenCV 4.10
transformers 4.41.1
diffusers 0.30.1
CUDA 12.1

heshengtao commented 1 week ago

Since you didn't give me more information, I can't judge what the problem is. You can update the party again; I recently removed a lot of dependencies that tend to cause errors.

nerdyrodent commented 1 week ago

Had a couple of users report problems with new installs, the actual issue being this:

Installing llama-cpp-python...
Looking in indexes: https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu124, https://pypi.ngc.nvidia.com
ERROR: Could not find a version that satisfies the requirement llama-cpp-python (from versions: none)
ERROR: No matching distribution found for llama-cpp-python

Comfy recently went from CUDA 12.1 to 12.4. I upgraded from 12.1 to 12.4 and so had no such issues myself, but after uninstalling llama-cpp-python and reinstalling LLM Party, I was able to re-create the issue (looks like it's llama_index?).
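The CUDA build that matters here is the one PyTorch was compiled against, not the system toolkit, and it can be checked directly from Python, independent of this repo:

```python
import torch

# The CUDA version this PyTorch build was compiled against, e.g. "12.1"
# or "12.4". The cuBLAS wheel index URL has to match this tag; a "12.4"
# here explains the "No matching distribution found" error above.
print(torch.version.cuda)
```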

Note also that this error doesn't occur when you run pip install -r requirements.txt, but at ComfyUI startup, where it does some requirements checks.

This doesn't stop LLM Party from loading, though, and I've not yet seen anything that breaks without it :)
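For readers wondering why the error surfaces at startup rather than while installing requirements.txt: many ComfyUI custom nodes re-check their dependencies when they are imported. A minimal sketch of that pattern, assuming a startup check along these lines (the index URL is taken from the log above; everything else is illustrative, not LLM Party's actual code):

```python
import importlib.util
import subprocess
import sys

# Hypothetical startup check, for illustration: look for the package,
# and if it is missing, shell out to pip with the CUDA-specific wheel
# index. On CUDA 12.4 this is the step that fails, because the index
# below hosts no cu124 builds of llama-cpp-python.
WHEEL_INDEX = "https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu124"

if importlib.util.find_spec("llama_cpp") is None:
    print("Installing llama-cpp-python...")
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "llama-cpp-python", "--extra-index-url", WHEEL_INDEX,
    ])
```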

heshengtao commented 1 week ago

I made this dependency optional. If it is not installed correctly, only the LVM loader will fail; it has no effect on other nodes. The error occurs because this dependency has not been adapted to CUDA 12.4.
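The optional-dependency behavior described here is typically achieved by guarding the import so that a failed install only disables the node that needs it. A minimal sketch of that pattern, with a hypothetical loader class for illustration (this is not the project's actual code):

```python
# Guard the import so a missing or broken llama-cpp-python install
# does not prevent the rest of the node pack from loading.
try:
    from llama_cpp import Llama
    LLAMA_CPP_AVAILABLE = True
except ImportError:
    Llama = None
    LLAMA_CPP_AVAILABLE = False

class LocalModelLoader:  # hypothetical node class, for illustration only
    def load(self, model_path: str):
        if not LLAMA_CPP_AVAILABLE:
            # Only this node fails; every other node works normally.
            raise RuntimeError(
                "llama-cpp-python is not installed; the local GGUF "
                "loader is unavailable, but other nodes are unaffected."
            )
        return Llama(model_path=model_path)
```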

IsItDanOrAi commented 1 week ago

Just an update. I wanted to say thank you to both of you.

Heshengtao. The node seems to install perfectly now. I uninstalled LLM Party and manually reinstalled it via git pull, as was mentioned, and it now works perfectly. The changes definitely helped, as I had tried this before. Thank you for the updates. Sorry if my original post didn't provide enough information to help locate the issue, but I'm glad you were able to figure it out.

NerdyRodent. Just wanted to say thank you for the clarification on the error, and for all your videos. Great work, and invaluable to the community.

heshengtao commented 6 days ago

Now, when installing llama-cpp-python in a CUDA 12.4 environment, it will automatically install the CUDA 12.2 build instead. I tested it and the issue should be solved. If there are no problems, I will close this issue in three days.
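A minimal sketch of the fallback described above: detect the CUDA build and, for 12.4, install the cu122 wheel instead. The version mapping and index URL pattern are assumptions based on this thread, not the repository's actual install script:

```python
import subprocess
import sys

import torch

# Pick a wheel tag that actually exists on the index: CUDA 12.4
# environments fall back to the cu122 build, everything else keeps
# its own tag (assumed mapping, mirroring the fix described above).
cuda = torch.version.cuda or "12.1"
tag = "cu122" if cuda.startswith("12.4") else "cu" + cuda.replace(".", "")

subprocess.check_call([
    sys.executable, "-m", "pip", "install", "llama-cpp-python",
    "--extra-index-url",
    f"https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/{tag}",
])
```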