Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
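The "bias is added to the logits prior to sampling" step can be sketched as follows. This is a minimal illustration with made-up token IDs and logit values, not the actual sampler code of any runtime:

```python
import math

def apply_logit_bias(logits, logit_bias):
    # logits: dict of token_id -> raw logit from the model
    # logit_bias: dict of token_id -> bias in [-100, 100]
    # The bias is simply added to the token's logit before sampling.
    biased = dict(logits)
    for token_id, bias in logit_bias.items():
        if token_id in biased:
            biased[token_id] += bias
    return biased

def softmax(logits):
    # Convert logits to a probability distribution over tokens.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

# Hypothetical vocabulary of three tokens with raw logits.
logits = {100: 2.0, 200: 1.5, 300: 0.5}

# Bias of -100 effectively bans token 100; +100 makes token 300
# near-exclusive, matching the behavior described above.
probs = softmax(apply_logit_bias(logits, {100: -100, 300: 100}))
```

With +100 on token 300, its probability after softmax is essentially 1, while the banned token's probability collapses to effectively 0.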
-> Need to confirm whether llama.cpp supports this; it could be a nice-to-have feature.
✅ QA: logit_bias 100 vs -100:
- logit_bias = 100 => exclusive selection of the word "under"
- logit_bias = 10 => increased selection of the word "under"
- logit_bias = 0 => normal response
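For the QA scenario above, an OpenAI-compatible request body would look roughly like this. The token ID used for "under" is a hypothetical placeholder; the real ID depends on the model's tokenizer:

```python
import json

# Hypothetical token ID for "under" -- the actual ID must be looked up
# in the tokenizer of the model being used.
UNDER_TOKEN_ID = "1234"

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Where is the cat?"}],
    # +100 forces near-exclusive selection of the token;
    # -100 would ban it from the completion instead.
    "logit_bias": {UNDER_TOKEN_ID: 100},
}

print(json.dumps(payload, indent=2))
```

Note that the API expects token IDs as keys (stringified in JSON), not the token text itself.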
Parameter: logit_bias (type: map, optional, defaults to null). The description is the same as above.
reference: https://platform.openai.com/docs/api-reference/chat/create#chat-create-logit_bias
related issue: https://github.com/janhq/internal/issues/160