danch99 opened this issue 7 months ago
You may want to take a look at the ComfyUI-CoreMLSuite custom node, which converts safetensors models to Core ML models for use in ComfyUI.
Over at aszc-dev/ComfyUI-CoreMLSuite they have a similar issue: https://github.com/aszc-dev/ComfyUI-CoreMLSuite/issues/34
Apparently, if we substitute
import torchsde
with
import mlx
in comfy/k_diffusion/sampling.py (and update all references to torchsde in that file accordingly),
it would work.
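For what it's worth, a blind import swap is unlikely to work: mlx doesn't expose torchsde's API, and (as far as I can tell) sampling.py uses torchsde for the Brownian-tree noise sampling behind the SDE samplers, so those code paths would need an actual reimplementation. A minimal sketch of a safer pattern, a guarded backend check instead of an unconditional import (`pick_noise_backend` is a hypothetical helper, not a ComfyUI function):

```python
import importlib.util

def pick_noise_backend():
    """Report which SDE noise backend is available, without hard-failing at import.

    Prefers torchsde, since that is the API sampling.py actually calls.
    The mlx branch is hypothetical: using it would require reimplementing
    the Brownian-tree noise sampler on top of MLX arrays.
    """
    if importlib.util.find_spec("torchsde") is not None:
        return "torchsde"
    if importlib.util.find_spec("mlx") is not None:
        return "mlx"
    return None  # SDE samplers unavailable; other samplers can still run

print(pick_noise_backend())
```

The point of the guard is that ComfyUI still starts when neither library is installed, instead of crashing at import time the way a bare `import mlx` substitution does.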
I am seeing a 60% decrease in generation time once mlx is in use.
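In case it helps others reproduce that comparison, here is a minimal backend-agnostic timing harness; the workload in the example is a placeholder, not an actual sampler call:

```python
import time

def time_fn(fn, *args, warmup=1, repeats=5, **kwargs):
    """Return the best-of-N wall-clock time (seconds) for fn(*args, **kwargs).

    A warmup pass is run first so one-time costs (compilation, caching)
    don't pollute the measurement; best-of-N reduces scheduler noise.
    """
    for _ in range(warmup):
        fn(*args, **kwargs)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args, **kwargs)
        best = min(best, time.perf_counter() - t0)
    return best

# Placeholder workload; swap in a single sampling step per backend to compare.
baseline = time_fn(lambda: sum(i * i for i in range(100_000)))
print(f"baseline: {baseline * 1e3:.2f} ms")
```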
Thanks for your answers. I have to admit I am a bit lost. I tried to convert a safetensors model using CoreMLSuite, but I had to stop it after several hours; I don't even know if it was doing anything. I also tried changing torchsde to mlx in the sampling.py file, and after that ComfyUI no longer starts.
Is there a tutorial somewhere I can follow?
Would you be willing to give more information to help me get this working? Please.
Thanks
Hello everyone,
I've just coded some MLX-based custom nodes that port DiffusionKit into ComfyUI. Currently, these are basic nodes for text-to-image workflows, but let me know which workflows you'd like to optimize.
I'd be happy to help! 😊
ComfyUI-MLX: https://github.com/thoddnn/ComfyUI-MLX
DiffusionKit: https://github.com/argmaxinc/DiffusionKit
Hey ComfyUI Team,
Quick question: are there any plans to bring the mlx library into the mix for ComfyUI, especially with Apple Silicon in mind? It seems like it could really boost performance for those of us on the newer Macs.
Would love to see it happen. What do you think?
Cheers!
Dan