voidism / DoLa

Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
https://arxiv.org/abs/2309.03883

Some questions about the idea of the paper #6

Closed Rh-Dang closed 6 months ago

Rh-Dang commented 1 year ago

I'm very interested in your work, but I'm confused about some of the ideas in the paper. The paper argues that shallow-layer outputs attend more to grammatical coherence, and that the LLM injects more factual knowledge as the layers deepen. You therefore try to reduce hallucination by suppressing the shallow-layer output distribution. I agree that shallow outputs lack the necessary knowledge, but that does not mean the shallow distribution is the inverse of the true distribution. That is, words with small probability in the shallow output are not necessarily good, and words with large probability are not necessarily bad. Therefore, I am skeptical of your theoretical interpretation of this contrastive method.

voidism commented 8 months ago

Hi,

"words with small probability in the shallow output are not necessarily good, and words with large probability are not necessarily bad"

I agree with your point here; however, we didn't claim that the shallow distribution is the inverse of the true distribution. Simply taking the inverse of the shallow distribution would not recover the true distribution.

Instead, we claim that the difference between the higher-layer distribution and the shallow-layer distribution recovers the true distribution. Our idea is that if a word has a large probability in the shallow-layer output but a relatively smaller probability in the final-layer output, the higher layers do not consider this word the correct answer, which is why its probability is decreasing. Let me know if this answers your question.
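
For anyone skimming this thread, here is a minimal sketch of that contrast, assuming PyTorch. It is not the repo's actual code; the names `final_logits`, `shallow_logits`, and `alpha` are illustrative. It shows how a log-probability difference between the final (mature) layer and a shallow (premature) layer, combined with a plausibility mask from the final layer, realizes the idea described above:

```python
import torch
import torch.nn.functional as F

def dola_contrast(final_logits: torch.Tensor,
                  shallow_logits: torch.Tensor,
                  alpha: float = 0.1) -> torch.Tensor:
    """Contrast the final-layer distribution against a shallow-layer one.

    Tokens whose probability grows from the shallow layer to the final
    layer get boosted; tokens whose probability shrinks get penalized.
    """
    final_logprobs = F.log_softmax(final_logits, dim=-1)
    shallow_logprobs = F.log_softmax(shallow_logits, dim=-1)

    # Plausibility constraint: only keep tokens whose final-layer
    # probability is at least alpha times the max final-layer probability,
    # so implausible tokens cannot win the contrast by accident.
    probs = final_logprobs.exp()
    plausible = probs >= alpha * probs.max(dim=-1, keepdim=True).values

    # The contrastive score is the log-probability difference: words the
    # higher layers "promote" score high, words they "demote" score low.
    scores = final_logprobs - shallow_logprobs
    return scores.masked_fill(~plausible, float("-inf"))

# Usage sketch: pick the next token from the contrasted scores.
# next_token = dola_contrast(final_logits, shallow_logits).argmax(dim=-1)
```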