mbonacci opened 2 months ago
This is an effect from using unified KV cache: https://github.com/ggerganov/whisper.cpp/issues/1941#issuecomment-1986923227
Hi @ggerganov, I saw your comment here at #4130:
In order to resolve these, I think we should add a standard attention implementation where each sequence has its own KV cache buffer and the attention is computed separately. This way, users would be able to choose which implementation to use based on their specific use case.
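The two layouts contrasted in that comment can be sketched in a toy single-head example. This is illustrative numpy, not llama.cpp's actual code: per-sequence buffers compute attention independently, while a unified cache concatenates every sequence's KV slots into one buffer and masks out cross-sequence positions. The two are mathematically equivalent, but in optimized kernels the unified path changes the matmul shapes and hence the floating-point reduction order, which is where the tiny per-batch differences come from.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, n_seq = 8, 5, 3            # head dim, tokens per sequence, sequences

# Independent per-sequence state: one query and one KV history each
qs = [rng.standard_normal((1, d)) for _ in range(n_seq)]
ks = [rng.standard_normal((t, d)) for _ in range(n_seq)]
vs = [rng.standard_normal((t, d)) for _ in range(n_seq)]

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# (a) per-sequence KV buffers: each sequence attends only over its own cache
out_separate = [softmax(q @ k.T / np.sqrt(d)) @ v
                for q, k, v in zip(qs, ks, vs)]

# (b) unified KV cache: one shared buffer, cross-sequence slots masked to -inf
k_all = np.concatenate(ks)       # (n_seq * t, d)
v_all = np.concatenate(vs)
out_unified = []
for i, q in enumerate(qs):
    scores = q @ k_all.T / np.sqrt(d)
    mask = np.full(n_seq * t, -np.inf)
    mask[i * t:(i + 1) * t] = 0.0    # only this sequence's slots stay visible
    out_unified.append(softmax(scores + mask) @ v_all)

# In exact arithmetic the outputs agree; in a real kernel the (b) path runs
# over different tensor shapes, so the summation order (and thus the rounded
# result) can differ slightly from (a).
```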
Is there any plan for this implementation? Greedy generations that diverge for the same prompt can be a real problem.
No plan at the moment on my side. I haven't figured out a good way to implement this yet.
I've been investigating the performance of models with batched inference. I had expected slightly different results depending on the number of parallel sequences being evaluated (i.e. some small amount of random noise), but instead I noticed a very distinct downward trend: more parallel sequences lead to lower accuracy on the test set!
Is this expected?
Evaluating against the Google BoolQ dataset, vertical axis shows accuracy percentage (note it starts at 48%), horizontal axis shows number of sequences (each sequence answering an independent question):
This is not expected
Thanks for confirming that. I'll do some more digging into this to see if I can turn up anything more.
I tried running the BoolQ dataset again, but this time asking each question in N parallel sequences.
As far as I can tell, this always produces the same answer across all sequences, no matter how many parallel sequences I run (up to 64). There's some variance in accuracy with different sequence counts, but nothing as large as before. This is not what I had expected! Here's what that looks like:
Note that when running this test I made sure that no tokens were shared between sequences in the prompt batch, so each sequence is totally independent.
Running the batched.cpp example, modified to use greedy sampling, yields different completions across sequences (sample output below). I'm on Windows, with llama.cpp compiled by w64devkit, on a laptop with an RTX 3070.
Correct me if I'm wrong, but sampling with the greedy sampler (i.e. always picking the most likely next token) should always yield the same result for the same prompt and the same model.
Could this be a result of model quantization? (I'm using a Q6_K-quantized llama2-chat GGUF, and also tried 8-bit.) Note: llama.cpp was compiled without CUDA, so this is all on CPU.