redotvideo / haven

LLM fine-tuning and eval
https://haven.run
Apache License 2.0

Is there an error in the way the prompt is built? #86

Open tomad02 opened 1 year ago

tomad02 commented 1 year ago

According to https://gpus.llm-utils.org/llama-2-prompt-template/, the [INST] tag should come before the system tag, but in your code it is the other way around: https://github.com/havenhq/haven/blob/5ac243a7805eb4588184a34546031adde651ae63/llamatune/llamatune/data/chat_data_module.py#L98
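
For reference, here is a minimal sketch of the prompt format described at that link, where [INST] wraps the system block. The function and variable names are illustrative and not taken from llamatune:

```python
# Sketch of the Llama 2 chat template as documented at
# https://gpus.llm-utils.org/llama-2-prompt-template/ :
# [INST] opens first, the <<SYS>> block sits inside it.
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

print(build_llama2_prompt("You are a helpful assistant.", "Hello!"))
```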

mano3-1 commented 11 months ago

Hi @tomad02, I am currently looking into the same topic. Could you share any insights or conclusions you may have reached on this?