cchance27 opened 11 months ago
Just a tip: don't use SDXL Turbo at 1024x1024, lol, it really creates horror images
Just tried it and it does indeed work. Too bad the quality is visibly worse when compared side by side with .safetensors (images are blurry and colors are slightly faded).
Original: [image]
Converted to Core ML: [image]
And since we can't use ANE, the speed is the same as with MPS (at least on my machine). I don't know if this is the support you meant 😄 I'll see if there's any way to improve it, but it will probably require rewriting the SDXL part itself to run on NE, and I don't know yet if it's even possible.
This isn't likely due to SDXL Turbo specifically. The fact that SDXL doesn't work with the ANE, and that there's no Turbo model for 1.5 models, is really annoying.
@aszc-dev I've been wondering what the source of this degradation or blurriness is. Is it related to how coremltools handles shapes during model conversion?
There's actually a bug in the ml-stable-diffusion repo; there's an active PR to fix the blurriness with SDXL, but it hasn't been accepted yet. Looks like a similar issue.
@rovo79 Sorry, still haven't figured it out. Keep in mind that, for the reasons explained in ml-stable-diffusion's FAQ, the output is expected to differ slightly. Perhaps there is something in the way the Turbo model works that enhances these differences. I would have to take a closer look to find the cause.
@cchance27 Yes, this looks very similar to the issue described here, if that's what you had in mind. No, that part is handled by Comfy and the proper value is being used (double-checked to be sure). But perhaps it's something closely related; maybe there is some oversight in the conversion code, although that would be unexpected, since base SDXL doesn't have this issue.
I get errors trying to set up on my M3 Pro:
File "/Users/user/www/ComfyUI/custom_nodes/ComfyUI-CoreMLSuite/init.py", line 6, in
Cannot import /Users/user/www/ComfyUI/custom_nodes/ComfyUI-CoreMLSuite module for custom nodes: No module named 'coremltools'
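For what it's worth, that traceback usually means coremltools isn't installed in the Python environment ComfyUI actually runs from (a common pitfall when several Python installs coexist). A minimal sketch of a check-and-fix, where `ensure_module` is a hypothetical helper, not part of the suite:

```python
import importlib.util
import subprocess
import sys

def ensure_module(name: str) -> bool:
    """Return True if `name` is importable in the current interpreter;
    otherwise try to pip-install it into this same interpreter."""
    # Detect the missing module without importing it.
    if importlib.util.find_spec(name) is not None:
        return True
    # Install via sys.executable so the package lands in the environment
    # ComfyUI is actually running from, not some other Python on the system.
    subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return importlib.util.find_spec(name) is not None
```

The equivalent from a terminal would be activating ComfyUI's venv first and then running `pip install coremltools` (or installing the node pack's requirements file, if it ships one).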
Will SDXL Turbo support be possible? I saw you got SDXL support working. I'm still reading up on Turbo, but the implementation details seem to point towards it being a different scheduler and some form of layer on top of SDXL, as I read that you can pull Turbo off of the base model and apply it to finetunes.
So does that mean we can technically just use it as a normal SDXL model in the converter and only need a different sampler to handle Turbo noise correctly?
Edit: It seems that Turbo works fine using the standard Comfy workflow, just feeding the Convertor + Adapter into the SamplerCustom. Not sure if a SamplerDiscrete needs to be inlined or not; I don't see a difference between the direct/eps/lcm settings... but maybe I'm missing something.
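If it helps anyone reading along: the reason a plain SamplerCustom can work is that Turbo mostly just needs a very short sigma schedule, starting at the highest-noise timestep and ending at zero, while the model weights stay ordinary SDXL. A rough pure-Python sketch of how such a schedule could be derived from the standard SD discrete noise schedule (function names and the step-selection rule are illustrative assumptions, not ComfyUI's actual API):

```python
import math

def discrete_sigmas(num_train_steps=1000, beta_start=0.00085, beta_end=0.012):
    # Standard "scaled linear" beta schedule used by the SD model family.
    betas = [
        (beta_start ** 0.5 + (beta_end ** 0.5 - beta_start ** 0.5) * i / (num_train_steps - 1)) ** 2
        for i in range(num_train_steps)
    ]
    # Convert cumulative alphas to sigmas: sigma_t = sqrt((1 - a_cum) / a_cum).
    sigmas, alpha_cum = [], 1.0
    for b in betas:
        alpha_cum *= 1.0 - b
        sigmas.append(math.sqrt((1.0 - alpha_cum) / alpha_cum))
    return sigmas  # increasing with timestep index

def turbo_sigmas(steps=1):
    # Turbo runs only 1-4 steps: pick evenly spaced timesteps starting from
    # the noisiest one (999), then append 0.0 so the final step fully denoises.
    all_sigmas = discrete_sigmas()
    timesteps = [999 - int(999 / steps) * i for i in range(steps)]
    return [all_sigmas[t] for t in timesteps] + [0.0]
```

With a schedule like this fed to a custom sampler (and CFG effectively disabled, since Turbo is distilled for guidance-free sampling), the rest of the pipeline can treat the checkpoint as a normal SDXL model, which matches what the edit above observes.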