1038lab / ComfyUI-OmniGen

ComfyUI-OmniGen - A ComfyUI custom node implementation of OmniGen, a powerful text-to-image generation and editing model.
MIT License

What is this problem? #6

Closed TAYLENHE closed 2 weeks ago

TAYLENHE commented 2 weeks ago

Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")

What is this problem?

MaisonMeta commented 2 weeks ago

I have the same issue, on both OmniGen nodes available. Not sure how to fix it?

1038lab commented 2 weeks ago

What versions of Python, PyTorch, and CUDA are you using?

MaisonMeta commented 2 weeks ago

I found the fix:

1. Go to `\ComfyUI_windows_portable\ComfyUI\models\LLM\OmniGen-v1`
2. Open `config.json`
3. Scroll to the very bottom and change `"_attn_implementation": "sdpa"` to `"_attn_implementation": "eager"`

Worked for me, hope it helps!
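For anyone who prefers to script the edit above, here is a minimal sketch. The `set_eager_attention` helper is hypothetical (not part of the node), and the commented-out path is just the default portable-install location mentioned earlier:

```python
import json
from pathlib import Path

def set_eager_attention(cfg_path: Path) -> dict:
    """Switch a model config's attention implementation to 'eager'."""
    cfg = json.loads(cfg_path.read_text())
    cfg["_attn_implementation"] = "eager"  # was "sdpa"
    cfg_path.write_text(json.dumps(cfg, indent=2))
    return cfg

# Hypothetical path; adjust to your own install:
# set_eager_attention(Path(r"ComfyUI_windows_portable/ComfyUI/models/LLM/OmniGen-v1/config.json"))
```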

1038lab commented 2 weeks ago

I've updated the custom node to support the eager attention implementation, so simply updating the custom node in ComfyUI-Manager will resolve the issue.

This issue occurred due to using older versions of Python, PyTorch, or CUDA that don’t support the newer scaled_dot_product_attention (SDPA). With the eager implementation now in place, your setup should work without further issues.
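For context, "eager" attention is just the explicit matmul-softmax-matmul computation, while SDPA is a fused kernel that computes the same result faster. A minimal sketch of the equivalence in plain PyTorch (nothing OmniGen-specific):

```python
import torch
import torch.nn.functional as F

def eager_attention(q, k, v):
    # Explicit ("eager") attention: scale, matmul, softmax, matmul
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 4, 8, 16)
out_eager = eager_attention(q, k, v)
# The fused SDPA kernel (PyTorch >= 2.0) computes the same thing
out_sdpa = F.scaled_dot_product_attention(q, k, v)
print(torch.allclose(out_eager, out_sdpa, atol=1e-5))
```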

For optimal performance and to fully benefit from SDPA, I recommend updating your Python, PyTorch, and CUDA versions when possible.
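A quick way to check what you currently have (`scaled_dot_product_attention` ships with PyTorch 2.0 and later):

```python
import sys
import torch

print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)  # None on CPU-only builds
# scaled_dot_product_attention was added in PyTorch 2.0
print("SDPA   :", hasattr(torch.nn.functional, "scaled_dot_product_attention"))
```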

MaisonMeta commented 2 weeks ago

Thank you! Will update all. What versions should they be?