ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Random seed possible problems. #8593

Open 0wwafa opened 1 month ago

0wwafa commented 1 month ago

I ran llama.cpp (latest version) with these parameters:

```
prompt="""
Tell me a long story.
"""
```

```shell
llama-cli --seed 1721414715 -c 4096 -m /content/$m -t $(nproc) -ngl 999 -p "User: Hi\nBot:Hi\nUser: {prompt}\nBot:"
```

and in the log I read the seed was: 1721414715

so on the next run I used `--seed 1721414715`, but the story was a different one.

why?

0wwafa commented 1 month ago

The second time I ran llama.cpp with the same seed, it told me the same story.

So I don't understand why, when I did not specify the seed, the log showed `main: seed = 1721414715`,

and when I then entered that seed manually, it told me a different story,

but when run again with the same seed manually, it told the same story.

I see 2 possibilities:

  1. when not specified, the seed is shown "wrong"
  2. when entered manually, the seed is interpreted differently.

Rotatingxenomorph commented 1 month ago

The CUDA version introduces some randomness even with the same seed.

0wwafa commented 1 month ago

> The CUDA version introduces some randomness even with the same seed.

I am using CPU ONLY.

Rotatingxenomorph commented 1 month ago

> > The CUDA version introduces some randomness even with the same seed.
>
> I am using CPU ONLY.

Why the -ngl 999 then?

compilade commented 1 month ago

> I see 2 possibilities:
>
>   1. when not specified, the seed is shown "wrong"
>   2. when entered manually the seed is interpreted differently.

This is weird because neither of these possibilities seems to be what's happening, which means it might be hard to debug.

https://github.com/ggerganov/llama.cpp/blob/87e397d00bdcedd5cbf6dfda06a7b0f302462728/examples/main/main.cpp#L188-L194

> then run again with the same seed manually, it told the same story.

This rules out non-determinism of the backend.

EDIT: I can also reproduce this problem on my machine (with CPU-only inference). It's a very weird behavior.

compilade commented 1 month ago

AHA! The sampling seed in `params.sparams.seed` is set by `--seed`, but not when choosing a default seed in `main.cpp`.

This seems to fix it:

```diff
diff --git a/examples/main/main.cpp b/examples/main/main.cpp
index a0d817b1..ceed4ce5 100644
--- a/examples/main/main.cpp
+++ b/examples/main/main.cpp
@@ -187,6 +187,7 @@ int main(int argc, char ** argv) {
 
     if (params.seed == LLAMA_DEFAULT_SEED) {
         params.seed = time(NULL);
+        sparams.seed = params.seed;
    }
 
     LOG_TEE("%s: seed  = %u\n", __func__, params.seed);
```

> I see 2 possibilities:
>
>   1. when not specified, the seed is shown "wrong"
>   2. when entered manually the seed is interpreted differently.

It seems like BOTH of these guesses were true after all.

JohannesGaessler commented 1 month ago

> The CUDA version introduces some randomness even with the same seed.

The CUDA backend is deterministic in the sense that the same input parameters will produce the same output logits. However, if you use >1 slots or prompt caching on the server, then the input parameters can vary and thus the outputs will vary too.

Rotatingxenomorph commented 1 month ago

> The CUDA backend is deterministic as in the results for the same input parameters will have the same output logits. However, if you use >1 slots or prompt caching on the server then the input parameters can vary and thus the outputs will vary too.

That's good to learn! Thank you.

0wwafa commented 1 month ago

@compilade

> It seems like BOTH of these guesses were true after all.

:D So what was the seed when not specified? 0?

compilade commented 1 month ago

> so what was the seed when not specified? 0?

When not specified, the sampling seed is random.

https://github.com/ggerganov/llama.cpp/blob/22f281aa16f44d8f6ec2c180a0685ff27e04e714/common/sampling.cpp#L82

0wwafa commented 1 month ago

> > so what was the seed when not specified? 0?
>
> When not specified, the sampling seed is random.
>
> https://github.com/ggerganov/llama.cpp/blob/22f281aa16f44d8f6ec2c180a0685ff27e04e714/common/sampling.cpp#L82

@compilade so... I don't understand: what was happening before? Why didn't the printed seed work when it was chosen randomly?

> AHA! The sampling seed in params.sparams.seed is set by --seed, but not when choosing a default seed in main.cpp.

So why did it work the second time? Luck?

SharifIsmail commented 1 month ago

@JohannesGaessler

> The CUDA backend is deterministic as in the results for the same input parameters will have the same output logits. However, if you use >1 slots or prompt caching on the server then the input parameters can vary and thus the outputs will vary too.

I tried to figure out why using >1 slot does not produce deterministic results when doing parallel requests. Do you know why it is not possible to get deterministic output when making parallel requests?

JohannesGaessler commented 1 month ago

Because floating-point arithmetic is not associative: you only get bit-for-bit identical results if you do the exact same operations in the exact same order. But the whole reason >1 slots is faster is that you do not do that; instead the kernels change depending on how many slots are currently in use. Also, the positions of individual sequences within the unified KV cache will be different.

compilade commented 1 month ago

> I tried to figure out why using >1 slot does not produce deterministic results when doing parallel requests. Do you know why it is not possible to get deterministic output when making parallel requests?

See also https://github.com/ggerganov/whisper.cpp/issues/1941#issuecomment-1986923227.

But when the order is exactly the same, the output between runs can still be exactly the same, even with parallel sequences, as I've seen in https://github.com/ggerganov/llama.cpp/pull/6122#discussion_r1531405574.

SharifIsmail commented 1 month ago

I see. Thanks @compilade @JohannesGaessler

So, running higher-precision models with a higher-precision KV cache would alleviate this effect, right?

JohannesGaessler commented 1 month ago

No, even with 16 bit precision you will still run into this issue because the condition numbers of the weight matrices can be arbitrarily large.

SharifIsmail commented 1 month ago

I did some quick tests for the sake of curiosity with `Phi-3-mini-4k-instruct-fp16.gguf` vs `Phi-3-mini-4k-instruct-q4.gguf`.

Bottom line: as you stated, @JohannesGaessler, both are non-deterministic in the vast majority of cases. Even with cherry-picked settings attempting to minimize non-determinism (i.e. `-b 1 -ub 1 -nocb` with `cache_prompt=false`), I only managed to get a few prompts on the fp16 model to return deterministic output. I used `-np 10`, i.e. 10 slots and 10 parallel requests.

yaleeyang commented 4 days ago

> > The CUDA version introduces some randomness even with the same seed.
>
> The CUDA backend is deterministic as in the results for the same input parameters will have the same output logits. However, if you use >1 slots or prompt caching on the server then the input parameters can vary and thus the outputs will vary too.

Hey Johannes, are there any test cases for CUDA bit-exact determinism in the project?

JohannesGaessler commented 4 days ago

There are multiple in the server tests, but they're commented out since they're failing on master.