MLX Swift Chat: Run LLM models locally with MLX!

A multi-platform SwiftUI frontend for running local LLMs with Apple's MLX framework.

https://github.com/PreternaturalAI/mlx-swift-chat/assets/8635253/f20862f3-8cab-4803-ba6e-44108b075c9b


> MLX is an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)
>
> (@awnihannun)

This project is a fully native SwiftUI app that lets you run local LLMs (e.g. Llama, Mistral) on Apple silicon in real time using MLX.
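As a rough illustration of what "fully native SwiftUI" means here, the sketch below wires a chat-style view to a streaming text generator. The `TextGenerator` protocol and `EchoGenerator` placeholder are hypothetical names for this example only; they stand in for the app's actual MLX-backed types, which are not shown in this README.

```swift
import SwiftUI

// Hypothetical abstraction over a local text-generation backend (illustrative only).
protocol TextGenerator {
    /// Streams generated tokens for a prompt.
    func generate(prompt: String) -> AsyncStream<String>
}

/// Placeholder generator so the view compiles without MLX; a real app would
/// swap in an implementation backed by mlx-swift.
struct EchoGenerator: TextGenerator {
    func generate(prompt: String) -> AsyncStream<String> {
        AsyncStream { continuation in
            for word in prompt.split(separator: " ") {
                continuation.yield(String(word) + " ")
            }
            continuation.finish()
        }
    }
}

struct ChatView: View {
    let generator: any TextGenerator = EchoGenerator()
    @State private var prompt = ""
    @State private var output = ""

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            ScrollView {
                Text(output)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            HStack {
                TextField("Prompt", text: $prompt)
                Button("Run") {
                    output = ""
                    let current = prompt
                    Task {
                        // Append tokens as they stream in, keeping the UI responsive.
                        for await token in generator.generate(prompt: current) {
                            output += token
                        }
                    }
                }
            }
        }
        .padding()
    }
}
```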

Installation

  1. Open the Xcode project.
  2. Go to Signing & Capabilities.
  3. Change the Team to your own team.
  4. Set the destination to My Mac.
  5. Click Run.

Support for iOS is coming next week.

Usage

  1. Click on Manage Models in the inspector view.
  2. Download and install a model (we recommend starting with Nous-Hermes-2-Mistral-7B-DPO-4bit-MLX).
  3. Go back to the inspector and select the downloaded model from the model picker.
  4. Wait for the model to load; the status bar will flash "Ready" once it has loaded.
  5. Click the run button.

Roadmap

Frequently Asked Questions

What models are currently supported?

| Model   | Status                      |
|---------|-----------------------------|
| Mistral | Supported                   |
| Llama   | Supported                   |
| Phi     | Supported                   |
| Gemma   | Supported (may have issues) |

How do I add new models?

Models are downloaded from Hugging Face. To add a new model, visit the MLX Community on Hugging Face, search for the model you want, then add it via Manage Models → Add Model.

> [!IMPORTANT]
> Note that this project is still under active development and some models may require additional implementation to run correctly.
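Under the hood, adding a model amounts to downloading a snapshot of its Hugging Face repository to local storage. Below is a minimal sketch of that step, assuming the `Hub` module from swift-transformers; the app's internal implementation and the exact API signatures may differ.

```swift
import Foundation
import Hub  // from swift-transformers; assumed dependency, not necessarily what the app uses

// Download an MLX-converted model from the mlx-community organization.
// The repo ID below is the model recommended in the Usage section.
func downloadModel() async throws -> URL {
    let hub = HubApi()
    let repo = Hub.Repo(id: "mlx-community/Nous-Hermes-2-Mistral-7B-DPO-4bit-MLX")

    // Fetch only the weight and config files; returns the local snapshot directory.
    return try await hub.snapshot(from: repo, matching: ["*.safetensors", "*.json"]) { progress in
        print("Download progress: \(Int(progress.fractionCompleted * 100))%")
    }
}
```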

Is this suitable for production?

No. This project is not intended for production deployment.

What are the minimum hardware and software requirements?

Does this collect any data?

No. Everything is run locally on device.

What are the parameters?
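In general, local LLM frontends expose a small set of sampling controls. A hypothetical sketch of such settings follows; the names are illustrative and not necessarily this app's exact parameters.

```swift
// Hypothetical, illustrative parameter names; not necessarily the app's actual settings.
struct GenerationParameters {
    /// Randomness of sampling: higher values give more varied output.
    var temperature: Float = 0.7
    /// Restrict sampling to the K most likely next tokens.
    var topK: Int = 40
    /// Maximum number of tokens to generate before stopping.
    var maximumTokens: Int = 256
}
```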

Acknowledgements

Special thanks to Awni Hannun and David Koski for early testing and feedback.

Much ❤️ to all the folks who made MLX (especially mlx-swift) possible!