Closed · TAYLENHE closed this issue 2 weeks ago
I have the same issue with both of the available OmniGen nodes. Not sure how to fix it?
What version of Python, PyTorch and CUDA are you using?
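If you're not sure, here's a minimal check you can run in the same Python environment ComfyUI uses (note that SDPA, i.e. torch.nn.functional.scaled_dot_product_attention, only exists in PyTorch 2.0+):

```python
# Print the versions relevant to this issue and whether SDPA is available.
import sys
import torch

print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)  # None on CPU-only builds
print("SDPA   :", hasattr(torch.nn.functional, "scaled_dot_product_attention"))
```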
I found the fix:
Go to \ComfyUI_windows_portable\ComfyUI\models\LLM\OmniGen-v1 and open config.json. Scroll to the very bottom and change this:
"_attn_implementation": "sdpa"
to this:
"_attn_implementation": "eager"
Worked for me; hope it helps.
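If you'd rather script that edit than make it by hand, here's a minimal sketch (the path is the portable-install default from the comment above; adjust it to your setup):

```python
# Flip OmniGen's attention implementation from "sdpa" to "eager" in config.json.
import json
from pathlib import Path

config_path = Path(r"\ComfyUI_windows_portable\ComfyUI\models\LLM\OmniGen-v1\config.json")

config = json.loads(config_path.read_text())
config["_attn_implementation"] = "eager"  # was "sdpa"
config_path.write_text(json.dumps(config, indent=2))
```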
I've updated the custom node to support the eager attention implementation, so simply updating the custom node in comfyui-manager will resolve the issue.
This issue occurred due to using older versions of Python, PyTorch, or CUDA that don’t support the newer scaled_dot_product_attention (SDPA). With the eager implementation now in place, your setup should work without further issues.
For optimal performance and to fully benefit from SDPA, I recommend updating your Python, PyTorch, and CUDA versions when possible.
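For reference, the pattern behind the fix looks roughly like this (a sketch, not the node's actual code; model_path is a placeholder):

```python
# Try the faster SDPA attention first; fall back to eager attention if the
# architecture rejects it (transformers raises ValueError in that case).
from transformers import AutoModel

model_path = "path/to/OmniGen-v1"  # placeholder

try:
    model = AutoModel.from_pretrained(model_path, attn_implementation="sdpa")
except ValueError:
    model = AutoModel.from_pretrained(model_path, attn_implementation="eager")
```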
Thank you! Will update all. What versions should they be?
Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
What is causing this problem?