ngxson / wllama

WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
https://huggingface.co/spaces/ngxson/wllama
MIT License

Add `downloadModel` function #95

Closed by ngxson 3 months ago

ngxson commented 3 months ago

As the name suggests, this function allows the user to download a model to the cache without loading it into memory.
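
A minimal sketch of how such a function might be used from application code, assuming `downloadModel` accepts a model URL plus an options object with a progress callback (the exact signature, the config paths, and the model URL below are illustrative assumptions, not the confirmed wllama API):

```typescript
import { Wllama } from '@wllama/wllama';

// Paths to the wasm binaries; actual keys/paths depend on how the app bundles wllama.
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/esm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/esm/multi-thread/wllama.wasm',
};

// Hypothetical model URL, used here only for illustration.
const MODEL_URL =
  'https://huggingface.co/ggml-org/models/resolve/main/tinyllamas/stories15M-q4_0.gguf';

async function main() {
  const wllama = new Wllama(CONFIG_PATHS);

  // Fetch the GGUF file into the browser cache only; no inference
  // memory is allocated at this point. (Assumed signature.)
  await wllama.downloadModel(MODEL_URL, {
    progressCallback: ({ loaded, total }) => {
      console.log(`Downloaded ${Math.round((loaded / total) * 100)}%`);
    },
  });

  // Later, e.g. when the user picks the model in a "model manager" screen,
  // load it into memory; the cached copy avoids a second network download.
  await wllama.loadModelFromUrl(MODEL_URL);
}

main();
```

The key design point is the split between "download" and "load": the download step only populates the cache, so a model manager UI can prefetch or delete models without paying the memory cost of inference.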

The use case would be to allow an application to have a "model manager" screen that allows: