chflame163 / ComfyUI_OmniGen_Wrapper
ComfyUI custom node for the [OmniGen] project.
MIT License · 121 stars · 4 forks
Issues
Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument `attn_implementation="eager"` meanwhile. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")`
#11 opened 1 week ago by flyricci · 0 comments
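The error text above already quotes the suggested workaround. As a minimal sketch, this is what loading with eager attention looks like in plain transformers (using the example model id from the error message itself, not this wrapper's Phi3 checkpoint):

```python
# Minimal sketch of the workaround quoted in the issue title: load the
# model with eager attention instead of SDPA. "openai/whisper-tiny" is
# the example model id from the error message, not this wrapper's model.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "openai/whisper-tiny",
    attn_implementation="eager",  # avoid scaled_dot_product_attention
)
```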
I get this message: dzOmniGenWrapper OmniGen.from_pretrained() got an unexpected keyword argument 'quantize'
#10 opened 2 weeks ago by ABIA2024 · 0 comments
[Feature Request] Cache to disk instead of only VRAM
#9 opened 2 weeks ago by guzuligo · 0 comments
A couple of suggestions
#8 closed 1 week ago by set-soft · 4 comments
Error when low-VRAM mode is enabled
#7 opened 3 weeks ago by BecarefulW · 4 comments
Can the torch compile feature be added?
#6 opened 3 weeks ago by WOAI704 · 0 comments
pip requirements.txt error. Please advise, thank you.
#5 opened 3 weeks ago by hzeasy · 3 comments
total images must be the same as the number of image tags, got 1 image tags and 2 images
#4 opened 3 weeks ago by cardenluo · 4 comments
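The error in #4 reflects a constraint from the upstream OmniGen prompt format: the prompt must contain exactly one image placeholder tag per input image. A minimal sketch of the pairing (the tag syntax follows upstream OmniGen examples; the file names are placeholders):

```python
# Sketch of the constraint behind the error in #4: the number of image
# tags in the prompt must equal the number of input images. Tag syntax
# follows upstream OmniGen examples; file names here are placeholders.
prompt = ("A photo of <img><|image_1|></img> standing next to "
          "<img><|image_2|></img>.")
images = ["person.png", "dog.png"]  # two images -> two image tags

assert prompt.count("<img>") == len(images), (
    f"total images must be the same as the number of image tags, "
    f"got {prompt.count('<img>')} image tags and {len(images)} images"
)
```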
'FrozenDict' object has no attribute 'shift_factor'
#3 opened 3 weeks ago by miggosec · 3 comments
OmniGen.from_pretrained() got an unexpected keyword argument 'quantize'
#2 opened 3 weeks ago by TAYLENHE · 1 comment
Error..
#1 opened 3 weeks ago by msola-ht · 4 comments