alexrozanski / LlamaChat

Chat with your favourite LLaMA models in a native macOS app
https://llamachat.app
MIT License
1.43k stars · 53 forks

Add Metal/GPU support for running model inference #30

Open singularitti opened 1 year ago

singularitti commented 1 year ago

I am no expert in this, but inference seems to run entirely on the CPU, which can cause severe heat generation.

alexrozanski commented 1 year ago

@singularitti Adding support for this in llama.swift to start with (see https://github.com/alexrozanski/llama.swift/pull/8). This will be coming to LlamaChat v2, which is still a WIP!
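For context, a rough sketch of how Metal-accelerated inference is enabled in upstream llama.cpp (which llama.swift wraps), for anyone who wants GPU inference before LlamaChat v2 lands. The `LLAMA_METAL` build flag and the `-ngl`/`--n-gpu-layers` runtime option are assumptions based on llama.cpp from around this period; the exact flags and model format may differ by version, and llama.swift may expose this differently:

```shell
# Sketch, not LlamaChat's implementation: build upstream llama.cpp
# with the Metal backend on macOS (flag name assumed from that era).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_METAL=1 make

# -ngl (number of GPU layers) > 0 enables Metal offload; early Metal
# builds offloaded the whole model regardless of the exact value.
./main -m models/7B/ggml-model-q4_0.bin -ngl 1 -p "Hello"
```

The linked llama.swift PR would presumably wire this same backend through the Swift layer so LlamaChat can use it without a manual build.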