Closed: saul-jb closed this issue 1 year ago
As far as I can see in the code, the seed parameter only affects the --random-prompt parameter. If that is the case, it should probably be mentioned in the help. Is there any way to produce deterministic results?
Temperature, not the seed, controls token generation. So just set the temperature to zero. But many models produce odd results with --temp 0, so set it to a very low number instead: --temp 0.001
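To see why a very small temperature is effectively deterministic, here is a minimal sketch in plain Python (not this project's actual sampler): the logits are divided by the temperature before the softmax, so a tiny temperature collapses the distribution onto the highest-scoring token, while a literal zero would divide by zero unless special-cased.

```python
# Minimal sketch (not the tool's real sampler) of how temperature shapes
# the next-token distribution. As temperature -> 0 the softmax collapses
# onto the single highest-scoring token, which is why --temp 0.001 behaves
# almost deterministically, while a literal 0 would divide by zero.
import math

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]   # temperature = 0 would raise ZeroDivisionError
    m = max(scaled)                              # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.3]                         # hypothetical next-token scores
print(softmax_with_temperature(logits, 1.0))     # broad distribution, sampling varies
print(softmax_with_temperature(logits, 0.001))   # ~[1.0, 0.0, 0.0]: effectively greedy
```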
I added a comment about the seed in -h. :) (I think it can be used with some llama models, but the current backend does not support that.)
Thanks, I needed to read up more on LLM parameters. It seems --top_k 1 can also be used to restrict the model to choosing only the most likely token.
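For completeness, here is a rough sketch of what top-k filtering typically does (assumed standard behaviour, not this project's exact code): with k = 1 only the most likely token survives, so the choice is effectively greedy regardless of temperature or seed.

```python
# Rough sketch of top-k filtering (assumed standard behaviour, not this
# project's code): keep only the k highest-probability tokens and
# renormalise. With k = 1 the "sample" is always the argmax token.
def top_k_filter(probs, k):
    # indices of the k largest probabilities
    keep = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

probs = [0.55, 0.30, 0.10, 0.05]       # hypothetical next-token probabilities
print(top_k_filter(probs, 3))          # three candidates remain - sampling still varies
print(top_k_filter(probs, 1))          # {0: 1.0} - only the most likely token remains
```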
It appears the seed parameter doesn't work: I can provide it with the same prompt (What is a dog?) in two different clean sessions and the responses may differ. I would have expected the exact same result for a given model + template + context + seed + prompt combination. Have I missed something else that might cause different results, or is the seed not working?
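For reference, this is the reproducibility contract I would expect from a working seed, sketched in plain Python (assuming the sampler draws from a PRNG seeded by that value; the names here are illustrative, not from this project):

```python
# Sketch of the reproducibility contract I'd expect from the seed parameter
# (assuming the sampler draws from a PRNG seeded by that value).
import random

def sample_tokens(probs, seed, n=5):
    rng = random.Random(seed)                       # fixed seed -> fixed stream of draws
    tokens = list(range(len(probs)))
    return [rng.choices(tokens, weights=probs)[0] for _ in range(n)]

probs = [0.5, 0.3, 0.2]                             # hypothetical next-token probabilities
print(sample_tokens(probs, seed=42))
print(sample_tokens(probs, seed=42))                # identical to the line above
print(sample_tokens(probs, seed=7))                 # generally different
```

That is, two runs with the same seed (and everything else held fixed) should produce identical token sequences.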