JorgeR81 opened 2 months ago
I can run Flux, but it's slow.
I don't have experience running LLMs locally. Is there a minimum requirement to run this LLM?
I wouldn't mind running only the LLM in a separate workflow, just to get the prompts.
I have:
Should be fine. The lowest-end machine I tried it on has only 6 GB of VRAM, and it could run both the LLM node and a simple Flux image generation in the same workflow using the Q4 GGUF models.
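For a rough sense of why a Q4 GGUF model fits in that little VRAM, here is a back-of-envelope sketch. The numbers are my assumptions, not from this thread: roughly 4.5 bits per weight for a Q4_K_M-style quantization, plus a fixed allowance for the KV cache and runtime overhead (the real figure depends on context length and the specific quant).

```python
def estimate_gguf_vram_gb(params_billion: float,
                          bits_per_weight: float = 4.5,
                          overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate for a quantized GGUF model.

    params_billion: parameter count in billions (e.g. 8 for an 8B model).
    bits_per_weight: ~4.5 is a common figure for Q4_K_M quants (assumption).
    overhead_gb: flat allowance for KV cache and runtime buffers (assumption).
    """
    weights_gb = params_billion * bits_per_weight / 8  # billions * bits -> GB
    return weights_gb + overhead_gb

# An 8B model at Q4 comes out around 5.5 GB by this estimate,
# which is consistent with it squeezing into a 6 GB card.
print(round(estimate_gguf_vram_gb(8), 2))
```

This ignores the image model's own footprint, so running Flux in the same workflow relies on the backend offloading or swapping models rather than holding everything resident at once.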