StanfordSpezi / SpeziLLM

A module enabling the integration of Large Language Models (LLMs) with the Spezi Ecosystem
https://swiftpackageindex.com/StanfordSpezi/SpeziLLM/documentation
MIT License

CoreML Model Download and Execution Functionality #19

Open PSchmiedmayer opened 1 year ago

PSchmiedmayer commented 1 year ago

Problem

Many models, LLMs in particular, are large even after being transformed into potentially on-device-executable versions using CoreML. Shipping these models with a mobile application is impractical. And even when we download and abstract away these models, the app still needs UI and progress indicators to communicate the implications to the user.

Solution

All building blocks to create a good integration into SpeziML are in place.

  1. Apple CoreML already provides the functionality to download and compile a model on the user’s device: https://developer.apple.com/documentation/coreml/downloading_and_compiling_a_model_on_the_user_s_device
  2. Hugging Face hosts CoreML models in their repositories, which we would need to download, e.g. Llama 2: https://huggingface.co/pcuenq/Llama-2-7b-chat-coreml
  3. We can use SwiftUI to create a nice download progress API that tracks the progress of downloading the model and making it ready to be executed (a minimal sketch follows this list).
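
Combining (1) and (3), the download-then-compile flow could look roughly like the sketch below. `ModelDownloader`, `ModelDownloadView`, and the placeholder URL are hypothetical names for illustration, not existing SpeziML API; `MLModel.compileModel(at:)` is the documented CoreML entry point from the link in (1).

```swift
import CoreML
import SwiftUI

/// Downloads a CoreML model and compiles it for on-device execution.
/// `ModelDownloader` is a hypothetical name, not part of the SpeziML API.
@MainActor
final class ModelDownloader: ObservableObject {
    @Published var progress = Progress()
    @Published var compiledModelURL: URL?

    func download(from remoteURL: URL) {
        let task = URLSession.shared.downloadTask(with: remoteURL) { temporaryURL, _, error in
            guard let temporaryURL, error == nil else {
                return
            }

            let fileManager = FileManager.default
            do {
                // Move the downloaded artifact out of its temporary location
                // and give it the .mlmodel extension before compiling.
                let modelURL = fileManager.temporaryDirectory
                    .appendingPathComponent("model.mlmodel")
                try? fileManager.removeItem(at: modelURL)
                try fileManager.moveItem(at: temporaryURL, to: modelURL)

                // Compiling produces an .mlmodelc bundle in a temporary location;
                // move it somewhere permanent so it survives app restarts.
                let compiledURL = try MLModel.compileModel(at: modelURL)
                let applicationSupport = try fileManager.url(
                    for: .applicationSupportDirectory,
                    in: .userDomainMask,
                    appropriateFor: nil,
                    create: true
                )
                let destination = applicationSupport
                    .appendingPathComponent(compiledURL.lastPathComponent)
                try? fileManager.removeItem(at: destination)
                try fileManager.moveItem(at: compiledURL, to: destination)

                Task { @MainActor in
                    self.compiledModelURL = destination
                }
            } catch {
                // Error handling (retries, user-facing messages) omitted in this sketch.
            }
        }

        // `URLSessionTask.progress` can drive SwiftUI's `ProgressView` directly.
        progress = task.progress
        task.resume()
    }
}

struct ModelDownloadView: View {
    @StateObject private var downloader = ModelDownloader()

    var body: some View {
        Group {
            if downloader.compiledModelURL != nil {
                Text("Model ready for execution.")
            } else {
                ProgressView(downloader.progress)
            }
        }
        .onAppear {
            // Placeholder URL; a real integration would point at the
            // Hugging Face-hosted artifact referenced above.
            downloader.download(from: URL(string: "https://example.com/Llama-2.mlmodel")!)
        }
    }
}
```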

Similar to #18, we should add some sort of abstraction layer to the API to enable reuse across different models, maybe initially focusing on the Hugging Face and LLM use case; one possible shape is sketched below. Testing this functionality is probably best done on a macOS machine. This might require some smaller changes to the framework.
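
One way such an abstraction layer could look, sketched with hypothetical names (`DownloadableModel`, `CoreMLModel`, `RawModelFile`) that are not part of the SpeziML API:

```swift
import CoreML
import Foundation

/// A hypothetical abstraction for downloadable on-device models.
protocol DownloadableModel {
    /// Remote location of the model artifact, e.g. a Hugging Face asset.
    var remoteURL: URL { get }
    /// Model-specific preparation step performed after the download completes.
    func prepare(artifactAt url: URL) throws -> URL
}

/// CoreML models need an additional compile step after the download.
struct CoreMLModel: DownloadableModel {
    let remoteURL: URL

    func prepare(artifactAt url: URL) throws -> URL {
        try MLModel.compileModel(at: url)
    }
}

/// Other model formats could be used as downloaded, with no extra step.
struct RawModelFile: DownloadableModel {
    let remoteURL: URL

    func prepare(artifactAt url: URL) throws -> URL {
        url // no compilation step required
    }
}
```

Keeping the preparation step on the protocol lets the download-and-progress machinery stay model-agnostic while each model format supplies its own post-processing.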

Additional context

No response

philippzagar commented 7 months ago

Sadly, in its current state, CoreML is not optimized for running LLMs and is therefore far too slow for local LLM execution. SpeziLLM currently provides local inference functionality via llama.cpp, but that may change if Apple updates CoreML at this year's WWDC.