ngxson / wllama

WebAssembly binding for llama.cpp - Enabling in-browser LLM inference
https://huggingface.co/spaces/ngxson/wllama
MIT License

llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect #131

Closed: flatsiedatsie closed this issue 2 weeks ago

flatsiedatsie commented 3 weeks ago

I've upgraded to the latest version of wllama, and I'm experimenting with setting the seed value explicitly. While doing so, I noticed the warning from the title in the model-loading logs: `llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect`.
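For reference, an explicit-seed experiment might look roughly like the sketch below. This is a minimal, non-authoritative example: the wasm path keys and the `seed` load option are assumptions to verify against the wllama README and the `LoadModelConfig` typings of your installed version.

```ts
import { Wllama } from '@wllama/wllama';

// Map of wasm asset paths; see the wllama README for the exact key set
// and adjust the values to where the wasm files are served from.
const WASM_PATHS = {
  'single-thread/wllama.wasm': '/wasm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/wasm/multi-thread/wllama.wasm',
};

const wllama = new Wllama(WASM_PATHS);

// The `seed` option name here is an assumption: check your version's
// LoadModelConfig typings. The model URL is a placeholder.
await wllama.loadModelFromUrl('https://example.com/model.gguf', { seed: 1 });
```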

ngxson commented 3 weeks ago

The real (and always the first) question is: does it work with native llama.cpp?

flatsiedatsie commented 2 weeks ago

Good tip, thanks.

It turns out the cause was a typo: a missing `&` between two query parameters in the URL meant `cache_type_k` was being set to `f16seed=1` instead of `f16`, so the intended `seed=1` never reached the config at all.
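For anyone hitting the same thing: the failure mode is easy to reproduce with the standard `URLSearchParams` parser. This is an illustrative sketch; the parameter names come from this thread, but the parsing code is not wllama's:

```ts
// Buggy query string: the "&" between the two parameters is missing,
// so "seed=1" is swallowed into the value of cache_type_k.
const buggy = new URLSearchParams('cache_type_k=f16seed=1');
console.log(buggy.get('cache_type_k')); // "f16seed=1"
console.log(buggy.get('seed'));         // null

// Correct query string: the parameters are properly separated.
const fixed = new URLSearchParams('cache_type_k=f16&seed=1');
console.log(fixed.get('cache_type_k')); // "f16"
console.log(fixed.get('seed'));         // "1"

// One way to catch this class of typo early: validate parsed values
// against an allowlist before passing them on (value set is illustrative).
const ALLOWED_CACHE_TYPES = new Set(['f32', 'f16', 'q8_0', 'q4_0']);
const ctk = buggy.get('cache_type_k');
if (ctk !== null && !ALLOWED_CACHE_TYPES.has(ctk)) {
  throw new Error(`unexpected cache_type_k value: ${ctk}`);
}
```

Validating this way turns a silent misconfiguration into a loud error at load time instead of a quietly wrong cache type.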