Faster workflows for ComfyUI users on Mac with Apple Silicon
Install the MLX nodes from the Custom Nodes Manager, or install them manually.
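A typical manual install for a ComfyUI custom node is sketched below. The repository URL and folder name are placeholders (substitute this project's actual URL), and the sketch assumes the project ships a requirements.txt:

```bash
# From your ComfyUI installation, clone the nodes into custom_nodes
# (the URL below is a placeholder for this repository).
cd ComfyUI/custom_nodes
git clone https://github.com/<owner>/ComfyUI-MLX.git
cd ComfyUI-MLX

# Install the Python dependencies, then restart ComfyUI.
pip install -r requirements.txt
```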
Given the following environment:
Device: MacBook M2 Max, 96 GB
Model: Flux 1.0 dev (not quantized)
Size: 512x512
Prompt: Photo of a cat
Steps: 10
I get approximately:
70% faster when the model needs to be loaded
35% faster when the model is loaded
30% lower memory usage
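If you want to check these numbers on your own machine, one approach is to queue the same workflow twice through ComfyUI's HTTP API and compare the cold run (which includes model loading) with the warm run. The script below is a rough sketch, assuming a local ComfyUI server on the default port 8188 and a workflow exported in the API format as workflow_api.json:

```python
import json
import time
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default ComfyUI address

def queue_and_wait(workflow: dict) -> float:
    """Queue a workflow and block until it appears in /history."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{SERVER}/prompt", data=data,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        prompt_id = json.loads(resp.read())["prompt_id"]
    # /history/<id> stays empty until the run has finished.
    while True:
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            if json.loads(resp.read()):
                return time.perf_counter() - start
        time.sleep(0.5)

with open("workflow_api.json") as f:
    workflow = json.load(f)

cold = queue_and_wait(workflow)  # includes model loading
warm = queue_and_wait(workflow)  # model already resident
print(f"cold: {cold:.1f}s, warm: {warm:.1f}s")
```

Note that ComfyUI caches node outputs, so change the seed in the workflow between the two runs; otherwise the second run may return a cached result instead of actually re-sampling.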
A basic workflow is provided here to help you start experimenting with the nodes.
I started building these nodes because image generation from Flux models was taking too long on my MacBook. After discovering DiffusionKit on X, which showcased great image-generation performance on Apple Silicon, I decided to create a quick port of the library into ComfyUI.
The goal is to collaborate with other contributors to build a full suite of custom nodes optimized for Apple Silicon.
Additionally, we aim to minimize the reliance on torch to take full advantage of future MLX improvements and further enhance performance.
This will allow ComfyUI users on Mac with Apple Silicon to experience faster workflows.
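For context, MLX is Apple's array framework for Apple Silicon: arrays live in unified memory shared by the CPU and GPU, and computation is lazy. A minimal illustration (not code from these nodes):

```python
import mlx.core as mx

# Arrays are allocated in unified memory, visible to both CPU and GPU.
a = mx.random.normal((2048, 2048))
b = mx.random.normal((2048, 2048))

# Operations build a lazy compute graph; nothing has run yet.
c = a @ b

# mx.eval() materializes the result on the default device
# (the GPU on Apple Silicon), with no host/device copies.
mx.eval(c)
print(c.shape)  # (2048, 2048)
```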
Contributions are welcome! I'm open to suggestions and best practices, and you're encouraged to submit a Pull Request to improve the project. 🙏
ComfyUI MLX Nodes is released under the MIT License. See LICENSE for more details.
If you encounter any problems or have any questions, please open an issue in this repository.