second-state / WasmEdge-WASINN-examples

Apache License 2.0

qwen1_5-14b-chat-q5_k_m.gguf is not working #116

Closed njalan closed 1 month ago

njalan commented 2 months ago

In my testing, qwen1_5-14b-chat-q5_k_m.gguf works with llama.cpp but does not work with WasmEdge and llama-chat.wasm.

Here is my command:

```bash
wasmedge --dir .:. --nn-preload default:GGML:CPU:qwen1_5-14b-chat-q5_k_m.gguf llama-chat.wasm -c 2048 -n 512 -g 0
```

It produces no output at all.

njalan commented 1 month ago

@hydai Can you please help to look at this?

dm4 commented 1 month ago

Qwen uses the ChatML template format for prompts, so you need to add the `-p chatml` argument to your command. With the latest release of the GGML plugin, it should then work out of the box.
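A sketch of the adjusted command, combining the original flags from the report with the suggested `-p chatml` prompt-template option (paths and model filename taken from the original command; verify flag support against your installed llama-chat.wasm release):

```bash
# Same invocation as before, plus -p chatml to select the ChatML prompt template Qwen expects
wasmedge --dir .:. \
  --nn-preload default:GGML:CPU:qwen1_5-14b-chat-q5_k_m.gguf \
  llama-chat.wasm -c 2048 -n 512 -g 0 -p chatml
```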


Additionally, llama-chat.wasm originates from LlamaEdge/chat, so next time please consider raising the issue in that repository first.

njalan commented 1 month ago

@dm4 Thanks for your help