huchenlei / ComfyUI_omost

ComfyUI implementation of Omost
Apache License 2.0

More Inference options #15

Open · zdaar opened this issue 4 weeks ago

zdaar commented 4 weeks ago

Hello,

Would you consider adding more inference options for the LLM? For example, selecting which GPU to run LLM inference on when multiple are available, or CPU inference with GGUF quants?
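
For reference, here's a rough sketch of what the GGUF/CPU option could look like using llama-cpp-python (not part of this repo; the model filename and parameters are just examples):

```python
# Illustrative only: CPU inference on a GGUF quant via llama-cpp-python.
# The model path is a placeholder, not a file shipped with this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="omost-llama-3-8b.Q4_K_M.gguf",  # example GGUF quant
    n_gpu_layers=0,  # 0 = keep every layer on CPU
    n_ctx=4096,
)
result = llm("Generate a scene description:", max_tokens=256)
print(result["choices"][0]["text"])
```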

edit: it already does CPU inference, but the device choice seems to be dynamic based on the available VRAM, or on whether high VRAM is set to false.
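
If it helps, the behavior I'm seeing would be consistent with a heuristic like this (just my guess at the logic, not the repo's actual code; the threshold is made up):

```python
# Guess at the dynamic device selection; illustrative only.
import torch

def pick_llm_device(min_free_gb: float = 8.0, highvram: bool = False) -> str:
    """Return 'cuda:0' when enough VRAM is free (or highvram is forced), else 'cpu'."""
    if not torch.cuda.is_available():
        return "cpu"
    free_bytes, _total = torch.cuda.mem_get_info(0)
    if highvram or free_bytes / 1024**3 >= min_free_gb:
        return "cuda:0"
    return "cpu"  # low free VRAM: fall back to CPU inference
```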