google / prompt-to-prompt

Apache License 2.0

learned linear projections l_Q, l_K, l_V #46

Open cuixing61 opened 1 year ago

cuixing61 commented 1 year ago

Hi, this question is about the linear projections l_Q, l_K, l_V of the attention module in the Prompt-to-Prompt paper. The paper states that these linear projections are learnable. However, the introduction claims that "this method does not require model training". These two statements seem to contradict each other. How do you learn the parameters of l_Q, l_K, l_V?

david20571015 commented 11 months ago

In the attention module, l_Q, l_K, and l_V are learnable by design. However, since this paper builds on a pretrained diffusion model, their weights are simply loaded from the pretrained checkpoint and kept frozen, so there is no need to train them.
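To make the distinction concrete, here is a minimal, stdlib-only sketch of single-head attention (not the repo's actual implementation, which uses PyTorch). The key point: `w_q`, `w_k`, `w_v` stand in for the pretrained projection weights — Prompt-to-Prompt loads them and never updates them; at inference it only manipulates the attention map, not the weights. All names and the tiny identity weights below are illustrative assumptions.

```python
import math

def matmul(a, b):
    """Multiply two 2D lists: (n, d) x (d, m) -> (n, m)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax(row):
    """Numerically stable softmax over one row of scores."""
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def attention(x, w_q, w_k, w_v):
    """Single-head attention.

    x: (n, d) token features; w_q/w_k/w_v: (d, d) projection weights.
    In Prompt-to-Prompt these weights come from the pretrained model
    and are frozen -- only the attention map `attn` is edited.
    """
    q, k, v = matmul(x, w_q), matmul(x, w_k), matmul(x, w_v)
    d = len(w_q)
    k_t = [list(col) for col in zip(*k)]          # transpose K
    scores = matmul(q, k_t)                        # (n, n) similarity
    attn = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return attn, matmul(attn, v)

# Toy example with identity "pretrained" weights (hypothetical values):
eye = [[1.0, 0.0], [0.0, 1.0]]
x = [[1.0, 0.0], [0.0, 1.0]]
attn, out = attention(x, eye, eye, eye)
```

Each row of `attn` is a probability distribution over tokens; Prompt-to-Prompt's edits (word swap, refinement, re-weighting) replace or rescale entries of this map during sampling while the projection weights stay untouched.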