sebastianschramm opened 9 months ago
The default learning rate in the DPO recipe config is set to 5e-7, whereas https://huggingface.co/Intel/neural-chat-7b-v3 was trained with a learning rate of 1e-4 (albeit on a different dataset, https://huggingface.co/datasets/Open-Orca/SlimOrca).
I am wondering about this significant difference in learning rate, given that both models seem to perform well. Any insights you can share? Thank you