Open · awni opened this issue 5 days ago
It would be huge to see this; M2 Ultras in particular would be great, considering their large amount of unified RAM.
The M4 on my iPad is a monster. It would be great to leverage that power using an iPad M4 farm...
Running on MLX is very approachable for ML researchers; please consider supporting it.
I'd love to see it happen. Apple Silicon's large unified memory is extremely well suited to this kind of project.
That would be terrific to see, and I think it would really jumpstart usage of the model. Please consider supporting this!
This would be incredible! 192 GB on the M2 Ultra makes Apple Silicon + MLX a real AI powerhouse!
Any interest in doing an MLX back-end so we can run this efficiently on Apple silicon?
MLX docs
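
Not one of the maintainers, but for anyone wondering what a port would involve: MLX's core API is small and NumPy-like, with lazy evaluation and arrays that live in unified memory (no host/device copies). A minimal sketch of the array API, assuming nothing about this repo's model code; the shapes are arbitrary:

```python
# Minimal MLX sketch (pip install mlx; Apple Silicon only).
# Arrays live in unified memory, and ops are recorded lazily until evaluated.
import mlx.core as mx

a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))

c = a @ b       # lazy: no computation has happened yet
mx.eval(c)      # materializes the result on the default (GPU) device

print(c.shape)  # (1024, 1024)
```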