Closed: h0rv closed this issue 3 months ago
Low temperature doesn't reduce hallucinations. If t=0 resulted in zero hallucinations, people would use t=0 100% of the time.
In any case, for a different reason (trying to support seed), I have implemented temperature as a configurable parameter: https://github.com/Codium-ai/pr-agent/pull/1063
Awesome, thank you!
Currently, for example, the model `temperature` is hardcoded to `0.2` throughout the codebase. This is a reasonable temperature for good responses with fewer hallucinations. However, with open-source models becoming more capable (e.g. Llama 3 405B), model hyperparameters, like prompts, do not translate 1:1 from model to model.
So, I think it would be a good change to make the temperature, and potentially other parameters, configurable in `configuration.toml`.
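For illustration, a minimal sketch of what such a setting could look like in `configuration.toml`. The section and key names below are assumptions for the sake of example, not the actual keys used by pr-agent or the PR linked above:

```toml
[config]
# Hypothetical key: sampling temperature passed to the underlying model.
# Lower values (e.g. 0.2) tend to give more deterministic output, but the
# best default can vary between models (e.g. GPT-4 vs. Llama 3 405B).
temperature = 0.2
```

Keeping the default at `0.2` would preserve the current behavior while letting users tune it per model.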