Closed ziyanxzy closed 5 days ago
@ziyanxzy there is a Jupyter Notebook prepared by @eaidova on how to check the model with OpenVINO: https://openvinotoolkit.github.io/openvino_notebooks/?search=Image+generation+with+Flux.1+and+OpenVINO
Could you please follow the steps?
Sure, but is the einsum op supported in this case?
@ziyanxzy In the notebook I slightly reworked the place where einsum is called in the model, patching it to be more OpenVINO-friendly. If that is what you mean by supported, then yes.
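A common way such a patch works is to replace an einsum call with an equivalent matmul, which converters typically handle more robustly. The actual patch in the notebook may differ; the sketch below only illustrates the idea on an attention-style contraction, and the shapes and names are illustrative assumptions, not taken from the Flux model.

```python
import numpy as np

# Illustrative shapes: batch, heads, tokens, head_dim (assumed, not from Flux).
rng = np.random.default_rng(0)
b, h, i, d = 2, 4, 8, 16
q = rng.standard_normal((b, h, i, d))
k = rng.standard_normal((b, h, i, d))

# Original-style einsum: attention scores contracted over the head dimension.
scores_einsum = np.einsum("bhid,bhjd->bhij", q, k)

# Equivalent rewrite as a plain matmul over the last two axes.
scores_matmul = q @ k.swapaxes(-1, -2)

assert np.allclose(scores_einsum, scores_matmul)
```

In a PyTorch model the same rewrite would use `q @ k.transpose(-1, -2)`; the numerical result is identical, but the exported graph contains a MatMul instead of an Einsum node.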
Thank you. The pipeline runs successfully on CPU but fails on the GPU path: if I compile for GPU (Intel iGPU), it outputs an all-black picture.
@vladimir-paramuzov could you please comment on the GPU accuracy? maybe we should set FP32 inference precision?
@ziyanxzy Please try setting fp32 inference precision for all models in the pipeline first. Usually a black output picture means an fp16 overflow happened somewhere. If the pipeline works correctly with fp32 precision, you can then try switching the models back to f16 one by one. The issue often occurs in the VAE or the text encoders, so running those in fp32 precision won't have a big impact on the total pipeline performance.
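The debugging plan above can be sketched as a per-stage precision config. `INFERENCE_PRECISION_HINT` is a real OpenVINO property key (each stage's dict would be passed as `ov_config` when compiling that model), but the stage names below are illustrative assumptions and should be matched to the models actually present in your pipeline.

```python
# Illustrative Flux pipeline stages (assumed names, adjust to your pipeline).
FLUX_STAGES = ("transformer", "text_encoder", "text_encoder_2", "vae_decoder")

def precision_config(fp32_stages):
    """Return a per-stage ov_config: f32 for the given stages, f16 otherwise."""
    return {
        stage: {"INFERENCE_PRECISION_HINT": "f32" if stage in fp32_stages else "f16"}
        for stage in FLUX_STAGES
    }

# Step 1: run everything in f32 to confirm the black image is an fp16 overflow.
all_f32 = precision_config(set(FLUX_STAGES))

# Step 2: move stages back to f16 one by one. The VAE and text encoders are the
# usual overflow suspects, and keeping them in f32 costs little performance.
mostly_f16 = precision_config({"vae_decoder", "text_encoder", "text_encoder_2"})
```

If the all-f32 run produces a correct image, bisecting with configs like `mostly_f16` narrows down which stage overflows in f16.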
Closing this, I hope previous responses were sufficient to help you proceed and solve the issue. Feel free to reopen and ask additional questions related to this topic.
OpenVINO Version
2024.4.0-16311-2c8fd1e6e97
Operating System
Windows System
Device used for inference
CPU
Framework
PyTorch
Model used
flux
Issue description
Converting Flux takes a lot of time, and it fails at the Einsum op.
Step-by-step reproduction
Convert the Flux model: https://hf-mirror.com/black-forest-labs/FLUX.1-schnell
Relevant log output
No response
Issue submission checklist