ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Batched inference with greedy sampling yields different completions #6583

Open mbonacci opened 2 months ago

mbonacci commented 2 months ago

Using the batched.cpp example, modified to use greedy sampling, yields different completions across sequences (sample output below). I'm on Windows; llama.cpp was compiled with w64devkit on a laptop with an RTX 3070.

Correct me if I'm wrong, but sampling with the greedy sampler (i.e. always picking the most likely next token) should always yield the same result for the same prompt and the same model.

Could this be a result of model quantization (I'm using a Q6_K-quantized llama2-chat GGUF and also tried 8-bit)? Note: llama.cpp was compiled without CUDA, so this all runs on the CPU.
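For reference, the greedy modification amounts to replacing the example's sampler call with a plain argmax over the logits. A minimal sketch of that change (ctx, model, and i_batch follow the names used in batched.cpp; the rest is illustrative, not the exact diff):

```cpp
// greedy selection: always take the single highest-logit token
const float * logits  = llama_get_logits_ith(ctx, i_batch[i]);
const int     n_vocab = llama_n_vocab(model);

llama_token new_token_id = 0;
for (llama_token tok = 1; tok < n_vocab; ++tok) {
    if (logits[tok] > logits[new_token_id]) {
        new_token_id = tok;
    }
}
```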

batched ../models/TheBloke/Llama-2-7B-Chat-GGUF/llama-2-7b-chat.Q6_K.gguf  "Hello, my name is D" 4 50 0

sequence 0:

Hello, my name is Drew and I'm a 30-year-old man from the United States. I've been interested in Japanese culture for as long as I can remember, and I've been studying the language

sequence 1:

Hello, my name is Drew and I'm a 30-year-old man from the United States. I've been a fan of anime for as long as I can remember, and I've been lucky

sequence 2:

Hello, my name is Drew and I'm a 30-something year old man from the United States. I've been a fan of anime for as long as I can remember, and I've been lucky

sequence 3:

Hello, my name is Drew and I'm a 30-something year old man from the United States. I've been a fan of anime for as long as I can remember, and I've been lucky
ggerganov commented 2 months ago

This is an effect of using the unified KV cache: https://github.com/ggerganov/whisper.cpp/issues/1941#issuecomment-1986923227
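In short: with the unified KV cache, the attention for one sequence is computed over a buffer whose size and layout depend on the other sequences in the batch. The masked-out positions should not change the result mathematically, but they do change the order and shape of the floating-point reductions, and that can be enough to flip a greedy argmax between near-tied logits. A tiny standalone illustration of the numerical effect (not llama.cpp code):

```cpp
// Floating-point addition is not associative: the same mathematical sum,
// accumulated in a different order (e.g. over a differently padded buffer),
// can round to a different value. Near-tied logits can then swap ranks.
#include <cstdio>

int main() {
    const float a = 1e8f, b = -1e8f, c = 1.0f;

    const float sum1 = (a + b) + c; // 1.0f
    const float sum2 = a + (b + c); // 0.0f -- (b + c) rounds back to -1e8f

    std::printf("sum1 = %.1f, sum2 = %.1f\n", sum1, sum2);
    return 0;
}
```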

MichaelZhangBH commented 1 month ago

Hi @ggerganov, I saw your comment here at #4130:

In order to resolve these, I think we should add a standard attention implementation where each sequence has its own KV cache buffer and the attention is computed separately. This way, users would be able to choose which implementation to use based on their specific use case.

Is there any plan for this implementation? Sometimes greedy generations with different outcomes can be a problem.
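To make sure I understand the quoted idea, here is how I picture it, purely as a conceptual sketch (not an existing llama.cpp structure): each sequence owns its own K/V storage and attention never reads across sequences, in contrast to the current unified cache where all sequences share one buffer.

```cpp
// Conceptual sketch only -- not the llama.cpp API. Each sequence gets its own
// K/V buffer, so the attention for sequence s reads exactly the same memory
// regardless of how many other sequences are in flight, making greedy
// decoding independent of batch composition.
#include <vector>

struct kv_cache_separate {
    struct seq_buffer {
        std::vector<float> k; // per-sequence key storage
        std::vector<float> v; // per-sequence value storage
        int n_used = 0;       // tokens currently stored for this sequence
    };

    std::vector<seq_buffer> seqs; // one buffer per sequence id
};
```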

ggerganov commented 1 month ago

No plan at the moment on my side. I haven't figured out a good way to implement this yet.

martindevans commented 2 weeks ago

I've been investigating the performance of models with batched inference. I had expected slightly different results depending on the number of parallel sequences being evaluated (i.e. some small amount of random noise), but I have instead noticed a very distinct downward trend, i.e. more sequences leads to lower accuracy on the test set!

Is this expected?

Evaluating against the Google BoolQ dataset; the vertical axis shows accuracy as a percentage (note it starts at 48%), the horizontal axis shows the number of parallel sequences (each sequence answering an independent question):

[chart: accuracy vs. number of parallel sequences]

ggerganov commented 2 weeks ago

This is not expected

martindevans commented 2 weeks ago

Thanks for confirming that. I'll do some more digging into this to see if I can turn up anything more.

martindevans commented 2 weeks ago

I tried running the BoolQ dataset again, but this time asking each question in N parallel sequences.

As far as I can tell this always produces the same answer across all sequences, no matter how many parallel sequences I run (up to 64). There's some variance in accuracy across different sequence counts, but nothing as large as before. This is not what I had expected! Here's what that looks like:

[chart: accuracy vs. sequence count]

Note that when running this test I made sure that no tokens were shared between sequences in the prompt batch, so each sequence is totally independent.
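For completeness, the prompt batch for these runs is built so that every sequence gets its own copy of the prompt tokens, with each token tagged with exactly one seq_id, so nothing is shared between sequences. A rough sketch in terms of the llama.cpp C++ API (using the llama_batch_add helper from common.h; prompt_tokens, n_seqs, and ctx are assumptions standing in for my actual harness):

```cpp
// One independent copy of the prompt per sequence -- no token in the batch
// carries more than one seq_id, so the sequences share nothing.
llama_batch batch = llama_batch_init((int) prompt_tokens.size() * n_seqs, 0, 1);

for (int s = 0; s < n_seqs; ++s) {
    for (size_t i = 0; i < prompt_tokens.size(); ++i) {
        const bool want_logits = (i == prompt_tokens.size() - 1);
        // logits are only needed for the last prompt token of each sequence
        llama_batch_add(batch, prompt_tokens[i], (llama_pos) i, { s }, want_logits);
    }
}

if (llama_decode(ctx, batch) != 0) {
    // handle error
}
```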