Open s-smits opened 2 weeks ago
Hi @s-smits,
Thanks for the request! I'll try to support customized prompts in the next version. For now, please:
```python
task_prompt = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": task_prompt},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
```
I assume you want your model to output the reasoning steps without response prefilling.
Then run
```shell
pip install -e .
```
to install the customized repo. Cheers!
Thank you for the addition and the guide. Does this split the markdown part from the total response correctly, even though there are a lot of XML tags?
I'm not sure whether models will interpret the prompt correctly, given their capabilities. However, please do remember to state that the completed code snippet should be returned in a markdown block.
In the case of the previous Reflection model, the code sanitization still worked as expected.
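For reference, a minimal sketch of how such code sanitization could pull the markdown block out of a response that also contains XML-style reasoning tags. The helper name and regex here are my assumptions, not the repo's actual implementation:

```python
import re

def extract_code_block(response: str):
    """Return the first fenced code block in a model response, or None.

    Hypothetical helper: the repo's real sanitization logic may differ.
    """
    match = re.search(r"```(?:\w+)?\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else None

# XML-style reasoning tags around the fence do not break the extraction.
response = "<think>step 1</think>\n```python\nprint('hi')\n```"
print(extract_code_block(response))  # → print('hi')
```

Because the regex only matches inside the triple-backtick fence, surrounding XML tags are ignored rather than parsed.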
Hi,
How would I add a custom prompt, with `{{question}}` or something similar to insert code in between? I want to test this prompt
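A minimal sketch of how placeholder substitution could work while the repo lacks built-in support, assuming a `{{question}}` marker (the template syntax and helper name are my assumptions):

```python
# Hypothetical template; the exact wording is illustrative only.
PROMPT_TEMPLATE = """Solve the following task.

{{question}}

Return the completed code in a markdown block."""

def render_prompt(template: str, question: str) -> str:
    # Naive string substitution; a real setup might use Jinja2 instead.
    return template.replace("{{question}}", question)

task_prompt = render_prompt(PROMPT_TEMPLATE, "def add(a, b): ...")
print(task_prompt)
```

The rendered `task_prompt` can then be passed through `tokenizer.apply_chat_template` as shown earlier in the thread.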