TimYao18 opened this issue 1 year ago
Mochi Diffusion hits the same issue here.
From your Mochi screen cap, it looks like you are not using a Scribble ControlNet model that was converted for use with Split-Einsum (unless you renamed it).
With a Split-Einsum base model on CPU and GPU, you don't really need a ControlNet model converted specifically for Split-Einsum. An Original 512x512 (5x5) one will work.
But with CPU and Neural Engine, at least in Mochi, you must use a ControlNet model converted for Split-Einsum. The list of available models for download uses an -SE suffix for the Split-Einsum versions.
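The rule of thumb above can be sketched as a tiny helper. This is only a summary of the comment, not code from any of the packages involved; the compute-unit strings mirror the CLI's computeUnit values, and the function name is made up:

```python
def controlnet_must_be_split_einsum(compute_unit: str) -> bool:
    """True when a Split-Einsum (-SE) ControlNet conversion is required."""
    # With CPU_AND_GPU, an Original 512x512 ControlNet works even alongside
    # a Split-Einsum base model; CPU_AND_NE needs the -SE conversion.
    return compute_unit == "CPU_AND_NE"
```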
Yes, I tried the -SE version in Mochi Diffusion and it works fine. So this might still be a Python CLI or Swift CLI issue? I will keep trying.
It is very possible that the behavior differs between Mochi and a Swift / Python Core ML CLI. Mochi was built against an ml-stable-diffusion commit that is older than what you would get by cloning the ml-stable-diffusion git repo today. There also seemed to be a divergence, mentioned in the pending SDXL refiner PR, between the Swift CLI and the Diffusers app in some instances, which would imply a bug somewhere, as the two should behave the same.
(Mochi 4.2 was built with this commit: https://github.com/apple/ml-stable-diffusion/commit/ce8ee78e28613d8a2e4c8b56932b236cb57e7e20)
From the Refiner PR #227:
"I noticed that trying to run inference on the CLI wasn't working quite right, and I figured out that it needed the Unet to be float32 precision to work. I'm not sure why this happens, and the refiner works perfectly fine when running through the Diffusers app."
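The quoted observation can be sketched as a precision choice made at conversion time. This is purely my illustration, not the PR's code: the function name is invented, and the string values are placeholders for the precision setting coremltools exposes when converting (so the sketch runs without coremltools installed).

```python
def unet_precision_for(runtime: str) -> str:
    """Pick a conversion precision for the Unet per target runtime (sketch)."""
    # PR #227 reported the CLI needed a float32 Unet to work correctly,
    # while the Diffusers app ran fine with the usual float16 conversion.
    if runtime == "cli":
        return "FLOAT32"
    return "FLOAT16"
```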
Thank you for the heads-up. I rolled my diffusers project back to an older version, and ControlNet works great with the Neural Engine.
Yes, something (or several things) that changed in the past month or so in ml-stable-diffusion, coremltools, and/or diffusers is causing model mismatches all over the place. With all the changes, I have been unable to isolate the source of the issues. Different pipelines throw different errors at different points in different packages. It is now nearly impossible to tell whether an error comes from the conversion pipeline or the inference pipeline.
So I gave up trying to make sense of things. I am just sticking with older packages and conda environments that work consistently for 1.5 type models. I will likely stay there until Sonoma is released and stable, and the SDXL changes are debugged.
Hi, I hit an error when generating images with the Python CLI on my M1 MacBook Air. With computeUnit=CPU_AND_GPU + ControlNet, it works fine. With computeUnit=CPU_AND_NE + ControlNet, it errors with the log below. With computeUnit=CPU_AND_NE and no ControlNet, it works fine.
The same command and model files work fine on other MacBooks (M2, M2 Pro). I understand this is a specific scenario, but I still feel it's necessary to report it.
Also, I tried the Swift CLI using OpenPose-SE with the DreamShaper_v5_split-einsum_cn Core ML model converted by jrrjrr. On a MacBook Air M1 it still errors.
I tried Mochi Diffusion and got the same error message as above. I will add a screen capture later.
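The configurations reported above can be summarized in one place. This is just a recap of the observations on the M1 MacBook Air, keyed by (compute unit, ControlNet enabled); it is not code from any of the CLIs involved:

```python
# Observed results from this report, (compute_unit, controlnet_enabled) -> outcome.
observed = {
    ("CPU_AND_GPU", True):  "works",
    ("CPU_AND_NE",  True):  "error",
    ("CPU_AND_NE",  False): "works",
}

# Only the Neural Engine + ControlNet combination fails on the M1 Air.
failing = [cfg for cfg, result in observed.items() if result == "error"]
```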