huggingface / chat-ui

Open source codebase powering the HuggingChat app
https://huggingface.co/chat
Apache License 2.0

Prompt template for WizardLM-2-8x22B? #1183

Open · Arche151 opened this issue 1 month ago

Arche151 commented 1 month ago

What is the prompt template for WizardLM-2-8x22B in `.env.local`?

When setting it to the default one, `<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}`,

the generated output is very odd and incoherent.

When setting the prompt template to the one displayed in the model card, `{system_prompt} USER: {prompt} ASSISTANT: </s>`,

the output gets even worse.

Can anyone help?

nsarrazin commented 1 month ago

Hi! We recommend using the tokenizer to fetch the chat prompt template. Remove `chatPromptTemplate` from your model config and set `"tokenizer": "alpindale/WizardLM-2-8x22B"` instead.

That should hopefully work.
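For reference, a minimal sketch of what the full MODELS entry in `.env.local` could look like with that change. The endpoint block is a placeholder (an OpenAI-compatible local server is assumed here, not something specified in this thread):

```
MODELS=`[
  {
    "name": "alpindale/WizardLM-2-8x22B",
    "tokenizer": "alpindale/WizardLM-2-8x22B",
    "endpoints": [
      {
        "type": "openai",
        "baseURL": "http://localhost:8000/v1"
      }
    ]
  }
]`
```

With `tokenizer` set and `chatPromptTemplate` removed, chat-ui applies the chat template from the tokenizer's `tokenizer_config.json` fetched from the Hub, which is why this approach needs network access.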

Arche151 commented 1 month ago

@nsarrazin Thanks for the solution, but unfortunately it isn't optimal for me: it requires entering my HF_TOKEN, and chat-ui then needs to connect to the internet, which doesn't meet my privacy requirements.

Is there a way to do this fully offline?
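One fully offline alternative (an untested sketch, inferred from the Vicuna-style `USER:`/`ASSISTANT:` format shown in the model card rather than confirmed in this thread) would be to keep `chatPromptTemplate` and write the template by hand, using the same Handlebars helpers as the default template quoted above:

```
"chatPromptTemplate": "{{#if @root.preprompt}}{{@root.preprompt}} {{/if}}{{#each messages}}{{#ifUser}}USER: {{content}} ASSISTANT: {{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}"
```

Because the template is evaluated locally, this avoids any Hub lookup; the exact spacing around the turns and the trailing `</s>` would need to be checked against the model card.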