ggerganov / llama.cpp

LLM inference in C/C++

Feature Request: change model and lora from server api #7635

Closed: stygmate closed this issue 4 days ago

stygmate commented 3 months ago


Feature Description

Driving model loading from the server API (as Ollama does) could be useful. Even more useful would be the ability to switch LoRAs on the fly.
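For illustration, here is a minimal sketch of what on-the-fly LoRA switching could look like from a client's perspective. The `/lora-adapters` route, its JSON shape, and the server address are assumptions for the sake of the example, not a documented API at the time of this request:

```python
# Hypothetical sketch: switching LoRA adapters on a running llama.cpp
# server over HTTP. The /lora-adapters endpoint and its payload shape
# are assumptions for illustration only.
import requests

BASE = "http://localhost:8080"  # assumed llama-server address

def set_lora_scales(scales):
    """Apply per-adapter scales, e.g. [{"id": 0, "scale": 1.0}]."""
    r = requests.post(f"{BASE}/lora-adapters", json=scales)
    r.raise_for_status()

# Switch from fine-tune A to fine-tune B without reloading the base model:
set_lora_scales([{"id": 0, "scale": 0.0}, {"id": 1, "scale": 1.0}])
```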

Motivation

LoRAs are small, and switching between multiple fine-tunes can be very useful in complex applications running on small computers.

Possible Implementation

No response

perk11 commented 1 month ago

I can second this use case. I want to run multiple, fairly rarely used models on the same hardware, but not being able to unload them quickly uses up all available VRAM. If it were possible to specify the model to use in the API, that would be a huge help.
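For illustration, a minimal sketch of what per-request model selection could look like through the server's OpenAI-style chat endpoint. The idea that the server would load the named model on demand (and unload idle ones to free VRAM) is exactly what is being requested here, so the `model` field's behavior and the model name are hypothetical:

```python
# Hypothetical sketch of the requested behavior: choosing a model per
# request via an OpenAI-style "model" field. The on-demand load/unload
# semantics sketched here did not exist and are an assumption.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumed server address
    json={
        # Hypothetical: the server would load this model on demand and
        # unload idle ones to free VRAM.
        "model": "mistral-7b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```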

github-actions[bot] commented 4 days ago

This issue was closed because it has been inactive for 14 days since being marked as stale.