ngxson / wllama

WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
https://huggingface.co/spaces/ngxson/wllama
MIT License

Add download progress callback #13

Closed · ngxson closed this 6 months ago

ngxson commented 6 months ago

Supersedes #8

Usage:

// MODEL_SPLITS is the URL (or list of URLs) of the model's GGUF split file(s)
await wllama.loadModelFromUrl(MODEL_SPLITS, {
  embeddings: true,
  n_ctx: 1024,
  parallelDownloads: 5, // download up to 5 files at the same time
  progressCallback: ({ loaded, total }) =>
    console.log(`Downloading... ${Math.round(loaded / total * 100)}%`),
});

Please note that this also works with multiple splits: the reported progress is aggregated, meaning that loaded and total reflect the global progress across all files.
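
For instance, here is a minimal sketch wiring the aggregated callback to an HTML progress bar. The element id "download-progress" is an assumption for illustration, not part of wllama:

// Hypothetical example: drive an HTML <progress> element from the callback.
// Assumes the page contains: <progress id="download-progress"></progress>
const bar = document.getElementById('download-progress');

await wllama.loadModelFromUrl(MODEL_SPLITS, {
  embeddings: true,
  n_ctx: 1024,
  parallelDownloads: 5,
  progressCallback: ({ loaded, total }) => {
    // loaded/total are summed across all split files,
    // so the bar tracks the download as a whole
    bar.max = total;
    bar.value = loaded;
  },
});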