-
What is the prompt template for `WizardLM-2-8x22B` in the `.env.local`?
When setting it to the default one: `{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprom…
-
WizardLM-2-7B is a very talkative chatbot... it gives extensive answers.
Unfortunately, I haven't found any way to make it stop before being cut off (why does that happen?)
Any idea?
-
It seems that WizardLM has deleted all of their models. I'm posting here a few of the mirrors of these models. Feel free to add more mirrors as you find them -- you can copy the text of this post if y…
-
I am presenting these two issues as one, but I don't know if they are related.
I am interested in using OntoGPT with a HuggingFace model remotely, i.e. through the API, without downloading the model…
-
SFT-8 training is using [a slightly less cleaned version](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
Beyond SFT-8 we should repl…
-
Please let us know what model architectures you would like to be added!
**Up to date todo list below. Please feel free to contribute any model, a PR without device mapping, ISQ, etc. will still be …
-
In the call below, I've specified a model that does not exist: "a_model-that-does-not-exist".
The call still succeeds, but uses the model "Mistral Instruct" instead.
The chat client works fine with…
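A defensive client-side check can catch this kind of silent fallback: compare the model name echoed in the response body against the one you requested. The sketch below assumes an OpenAI-compatible response shape with a top-level `model` field; the helper name and the example response dict are illustrative, not part of any client's actual API.

```python
def assert_model_matches(requested: str, response: dict) -> None:
    """Raise if the server silently substituted a different model.

    OpenAI-compatible chat endpoints echo the served model in the
    'model' field of the response body. `response` here is a plain
    dict standing in for the parsed JSON body.
    """
    served = response.get("model", "")
    if served != requested:
        raise RuntimeError(
            f"requested model {requested!r} but server used {served!r}"
        )

# Example: a fallback to "Mistral Instruct" would be caught.
resp = {"model": "Mistral Instruct", "choices": []}
try:
    assert_model_matches("a_model-that-does-not-exist", resp)
except RuntimeError as err:
    print(err)
```

This turns a silent substitution into a loud failure, which is usually preferable when the served model materially affects output quality.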
-
When multiple requests are processed, the first request is interrupted. How to solve this problem?
My run command is as follows:
python3 -m llama_cpp.server --model ./models/WizardLM-13B-V1.2/gg…
-
Requesting that test results be added for the large models released in April–May 2024, such as Llama 3 and WizardLM.
The large models on the current leaderboard feel a bit outdated. Does your team plan to benchmark the latest batch of 2024 models?