LLM Agent Framework in ComfyUI includes Omost, GPT-SoVITS, ChatTTS, GOT-OCR2.0, and FLUX prompt nodes; provides access to Feishu and Discord; and adapts to all LLMs with OpenAI/Gemini-style interfaces, such as o1, Ollama, Grok, Qwen, GLM, DeepSeek, Moonshot, and Doubao. Also adapted to local LLMs, VLMs, and GGUF models such as Llama-3.2, with Neo4j knowledge-graph linkage, GraphRAG / RAG, and HTML-to-image conversion.
GNU Affero General Public License v3.0
1.06k stars · 94 forks
LLM_local: Separate `device` and `dtype` in the node #13
Can we also separate the device and the tensor type (dtype)?
This is a screenshot from the SUPIR node. Otherwise you would have to make a huge list like:
cuda-fp32
cuda-bf16
cuda-fp16
cuda-fp8
mps-fp32
mps-bf16
mps-fp16
and so on, probably for other hardware accelerators (like xpu) too. Selecting the device and dtype separately would be the best option, imho.
Also, a node should usually run on the same device that ComfyUI itself is using - maybe we can add an "auto" option for the device and make it the default.
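The suggestion above could be sketched roughly as follows. This is a minimal, hypothetical example (the class name `LLMLocalLoader`, the option lists, and the `resolve_device` helper are all assumptions, not the repo's actual code): the node exposes `device` and `dtype` as two separate dropdowns via `INPUT_TYPES`, with `"auto"` as the default device that defers to whatever ComfyUI itself is using.

```python
# Hypothetical sketch of separate `device` and `dtype` dropdowns
# in a ComfyUI node, instead of combined entries like "cuda-fp16".

DEVICES = ["auto", "cuda", "mps", "xpu", "cpu"]
DTYPES = ["auto", "fp32", "bf16", "fp16", "fp8"]

def resolve_device(choice: str) -> str:
    """Map the dropdown value to a concrete device string.

    "auto" should defer to the device ComfyUI is already using
    (e.g. comfy.model_management.get_torch_device() in a real node);
    here we fall back to "cpu" as a stand-in so the sketch runs anywhere.
    """
    if choice == "auto":
        return "cpu"  # placeholder for ComfyUI's own device
    return choice

class LLMLocalLoader:
    """Hypothetical node; ComfyUI discovers its widgets via INPUT_TYPES."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # Two independent dropdowns instead of a cuda-fp16 matrix.
                "device": (DEVICES, {"default": "auto"}),
                "dtype": (DTYPES, {"default": "auto"}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "load"
    CATEGORY = "llm"

    def load(self, device: str, dtype: str):
        dev = resolve_device(device)
        # A real node would map `dtype` to a torch.dtype and load the
        # model accordingly; here we just report the resolved pair.
        return (f"{dev}/{dtype}",)
```

With two independent lists, supporting a new accelerator means adding one entry to `DEVICES` rather than a whole new row of `device-dtype` combinations.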