Closed qdrddr closed 5 months ago
This is about running LLMs locally on Apple Silicon. Core ML is a framework that can distribute a workload across the CPU, GPU & Neural Engine (ANE). The ANE is available on all modern Apple devices: iPhones & Macs (A14 or newer and M1 or newer). Ideally, we want to run LLMs on the ANE as much as possible, since it is optimized for ML workloads compared to the GPU. Apple claims that deploying Transformer models "on Apple devices with an A14 or newer and M1 or newer chip" achieves "up to 10 times faster and 14 times lower peak memory consumption compared to baseline implementations".
https://machinelearning.apple.com/research/neural-engine-transformers
Work is in progress on a Core ML implementation for whisper.cpp, where they see roughly 3x performance improvements for some models (https://github.com/ggerganov/whisper.cpp/discussions/548) — you might be interested in it.
Here is another implementation, Swift Transformers, that may also be of interest. An example Core ML application: https://github.com/huggingface/swift-chat
Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.
Hope it's not closed/forgotten.
Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.
Closing due to inactivity.
Is your feature request related to a problem? Please describe.
Please consider adding support for the Core ML model package format to utilize the Apple Silicon Neural Engine (ANE) + GPU.
Describe the solution you'd like
Utilize both the ANE & GPU, not just the GPU, on Apple Silicon.

Describe alternatives you've considered
Currently only the GPU can be used.

Additional context
A list of models in the Core ML package format:
https://github.com/likedan/Awesome-CoreML-Models