huggingface / alignment-handbook

Robust recipes to align language models with human and AI preferences
https://huggingface.co/HuggingFaceH4
Apache License 2.0

What about the system prompt? #52

Open timothylimyl opened 8 months ago

timothylimyl commented 8 months ago

It seems that the system prompt is left as \n, or rather blank.

Inspecting UltraChat (https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k?row=5), it seems that no system prompt is added to the dataset.

There must be something that I missed regarding the addition of system prompts to the dataset for training, especially since the officially deployed model is able to adhere to system prompt intent (like "You are a pirate", etc.).
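
One way to see how the system turn is rendered is to apply the chat template directly. Below is a minimal sketch, assuming the HuggingFaceH4/zephyr-7b-beta tokenizer and its bundled chat template (the exact template in the handbook's SFT recipe may differ):

```python
# Sketch: compare how an empty vs. explicit system message is rendered.
# Assumes the HuggingFaceH4/zephyr-7b-beta tokenizer; adjust to the model you train.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

# Conversation with an explicit system prompt.
with_system = [
    {"role": "system", "content": "You are a pirate."},
    {"role": "user", "content": "Hello, who are you?"},
]

# Conversation with an empty system prompt, as in the ultrachat_200k-style SFT data.
empty_system = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Hello, who are you?"},
]

print(tokenizer.apply_chat_template(with_system, tokenize=False))
print(tokenizer.apply_chat_template(empty_system, tokenize=False))
```

With an empty system message the template still emits the system turn markers, just with no content between them, which is presumably what the training data looks like.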