Another suggestion: take a look at this fork: https://github.com/newgenai79/OmniGen/ It uses much less memory than the current implementation, and I got very good results testing the demos.
Thank you for your sincere suggestion. I will re-examine the code when I have some free time :)
Hi @chflame163 ! I implemented some extra stuff (including progress and preview) and separated the parts to take advantage of ComfyUI's memory management. My version of the nodes is here: https://github.com/set-soft/ComfyUI_OmniGen_Nodes/
Excellent job!
First of all, thanks for creating this wrapper!
It would be nice if you added a note to the README about where the model is stored (models/OmniGen/Shitao/OmniGen-v1). I don't like it when a workflow gets stuck for minutes downloading a model in limbo.
You could also add a link to https://huggingface.co/silveroxides/OmniGen-V1/tree/main. This repo has an FP8 version of the model, so you can download all the files (excluding model.safetensors) and rename the FP8 file to model.safetensors. Then you can use dtype=default and skip the 8-bit conversion, which also saves 12 GB of disk space.
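For reference, a hedged sketch of automating that download with huggingface_hub; the FP8 filename pattern is a guess on my part, so check the actual file listing in the repo first:

```python
import glob
import os
from huggingface_hub import snapshot_download

# Path taken from the comment above; adjust to your ComfyUI install.
target = "models/OmniGen/Shitao/OmniGen-v1"

# Grab everything except the full-precision model.safetensors.
snapshot_download(
    repo_id="silveroxides/OmniGen-V1",
    local_dir=target,
    ignore_patterns=["model.safetensors"],
)

# Rename the FP8 file so the wrapper loads it as model.safetensors.
# The "*fp8*" pattern is an assumption -- verify the real filename.
fp8 = glob.glob(os.path.join(target, "*fp8*.safetensors"))
assert fp8, "No FP8 file found; check the filename in the repo"
os.rename(fp8[0], os.path.join(target, "model.safetensors"))
```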
Also: it would be nice to implement progress reporting.
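Something like this minimal sketch, assuming the OmniGen pipeline accepts a per-step callback (the actual hook name in this wrapper may differ):

```python
from comfy.utils import ProgressBar

def run_with_progress(pipeline, steps, **kwargs):
    pbar = ProgressBar(steps)  # shows up in the ComfyUI progress bar

    def on_step(step_index, *_):
        # update_absolute sets the current position directly
        pbar.update_absolute(step_index + 1)

    # The callback kwarg is an assumption about the pipeline's API.
    return pipeline(num_inference_steps=steps, callback=on_step, **kwargs)
```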
Another thing: please think about decoupling the model load from the main node. I wonder if this model can be quantized to Q4_K_S and loaded using GGUF; that works better than FP8 for models like Flux.
Oh! BTW: the VAE could also be decoupled.
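Just to illustrate both points, a rough sketch of a separate loader node that returns the transformer and the VAE as independent outputs, so ComfyUI's model management can cache and offload each one on its own; `load_omnigen` and the type names here are hypothetical, not the wrapper's actual API:

```python
class OmniGenLoader:
    @classmethod
    def INPUT_TYPES(cls):
        # Combo input letting the user pick the weight precision.
        return {"required": {"dtype": (["default", "fp8"],)}}

    # Two outputs: the main node would take these as inputs instead
    # of loading everything itself.
    RETURN_TYPES = ("OMNIGEN_MODEL", "VAE")
    FUNCTION = "load"
    CATEGORY = "OmniGen"

    def load(self, dtype):
        # Hypothetical helper that reads the checkpoint from disk and
        # returns the transformer and VAE as separate objects.
        model, vae = load_omnigen(dtype=dtype)
        return (model, vae)
```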