danilopeixoto opened this issue 9 months ago
@danilopeixoto I've been thinking about having this in MLX LM recently. Any interest in sending a PR?
It might make sense to do it after we have a more manageable config (https://github.com/ml-explore/mlx-examples/pull/503), but that should land soon!
To be more concrete, I'm envisioning you just set the loss in the config, e.g. cross_entropy or dpo.
This would be an awesome addition to mlx_examples! 🔥
I'm very, very excited for this! I don't have the technical expertise to implement DPO directly but would love to help in other ways (config, code cleanup) if necessary!
That would make MLX really useful for production, not just a research tool!
+500 waiting for this
Waiting for this. When will DPO training be supported?
Introduce a Reinforcement Learning from Human Feedback (RLHF) example, such as the Direct Preference Optimization (DPO) method.
Paper
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Notes
Direct Preference Optimization (DPO): A Simplified Explanation by João Lages
Implementation examples
Possible MLX implementation
Policy and reference log probabilities:
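The original snippet is not reproduced here; below is a minimal sketch of how the per-sequence log probabilities could be gathered in MLX. The model, inputs, targets, and mask names are illustrative placeholders for an mlx-lm style causal LM and the tokenized chosen/rejected sequences, not an existing API.

```python
import mlx.core as mx


def sequence_log_probs(model, inputs, targets, mask):
    # Logits for each position: (batch, seq_len, vocab_size).
    logits = model(inputs)
    # Log-softmax over the vocabulary, written with logsumexp for stability.
    log_probs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    # Pick the log-probability of the observed target token at each position.
    token_log_probs = mx.take_along_axis(
        log_probs, mx.expand_dims(targets, -1), axis=-1
    ).squeeze(-1)
    # Sum over the completion tokens only (mask is 1 on completion tokens).
    return (token_log_probs * mask).sum(-1)


# Policy log-probs keep gradients; reference log-probs are detached, e.g.
# policy_chosen = sequence_log_probs(policy_model, inputs, targets, mask)
# reference_chosen = mx.stop_gradient(
#     sequence_log_probs(reference_model, inputs, targets, mask)
# )
```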
Loss:
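A sketch of the loss itself, following the paper's sigmoid objective with the label-smoothed ("conservative DPO") variant; function and argument names are illustrative, not a confirmed mlx-lm interface.

```python
import mlx.core as mx


def log_sigmoid(x):
    # log(sigmoid(x)) = -log(1 + exp(-x)), computed stably with logaddexp.
    return -mx.logaddexp(0.0, -x)


def dpo_loss(
    policy_chosen_logps,
    policy_rejected_logps,
    reference_chosen_logps,
    reference_rejected_logps,
    beta: float = 0.1,
    label_smoothing: float = 0.0,
):
    # Log-ratio of chosen vs. rejected under the policy and the reference model.
    policy_ratio = policy_chosen_logps - policy_rejected_logps
    reference_ratio = reference_chosen_logps - reference_rejected_logps
    logits = policy_ratio - reference_ratio

    # Label smoothing assumes preference labels are flipped with probability
    # label_smoothing; label_smoothing = 0 recovers the standard DPO loss.
    losses = (
        -log_sigmoid(beta * logits) * (1.0 - label_smoothing)
        - log_sigmoid(-beta * logits) * label_smoothing
    )

    # Implicit rewards, useful to log during training.
    chosen_rewards = beta * (policy_chosen_logps - reference_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - reference_rejected_logps)
    return losses.mean(), chosen_rewards, rejected_rewards
```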
Beta: The temperature parameter for the DPO loss, typically set in the range of 0.1 to 0.5. The reference model is ignored when beta equals 0.
Label smoothing: This parameter controls the conservativeness of the DPO loss, assuming that preferences are noisy and can be flipped with a probability of label_smoothing.
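For illustration, the two parameters above could be passed to the dpo_loss sketch shown earlier like so; the log-probability values and names are dummy placeholders.

```python
import mlx.core as mx

# Dummy per-sequence log-probabilities for a batch of two preference pairs.
policy_chosen = mx.array([-12.0, -9.5])
policy_rejected = mx.array([-14.0, -9.0])
reference_chosen = mx.array([-12.5, -9.8])
reference_rejected = mx.array([-13.5, -9.2])

# beta in the commonly used 0.1 to 0.5 range; label_smoothing = 0.0 gives the
# standard DPO loss, a small positive value gives the conservative variant.
loss, chosen_rewards, rejected_rewards = dpo_loss(
    policy_chosen,
    policy_rejected,
    reference_chosen,
    reference_rejected,
    beta=0.1,
    label_smoothing=0.0,
)
print(loss.item())
```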