Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

Embedding Concatenation in forward() Function #32

Closed qihan96 closed 1 year ago

qihan96 commented 1 year ago

Hi! Thanks for helping out with my questions. Why is the embedding concatenation different in forward() and forward_inference()?
https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/625dae647fef5d214dde9fb8b2900923c95d8b1f/accessory/model/LLM/llama_qformerv2_peft.py#L369
https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/625dae647fef5d214dde9fb8b2900923c95d8b1f/accessory/model/LLM/llama_qformerv2_peft.py#L394

ChrisLiu6 commented 1 year ago

I think they are the same from the computation perspective. There's no special consideration here :)

ChrisLiu6 commented 1 year ago

Oh sorry, my bad: that's a bug. Thanks for pointing it out. It's fixed now.
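For readers hitting the same question: the two code paths both splice the Q-Former's visual embeddings into the text embedding sequence, and the fix makes them build that sequence identically. Below is a minimal sketch of the idea, using plain Python lists in place of tensors and an illustrative helper name (`splice_visual_tokens` is not the repository's actual API; in the real model this is a `torch.cat` along the sequence dimension).

```python
# Minimal sketch of prefix-style embedding concatenation.
# Plain Python lists stand in for (seq, dim) tensors; names are
# illustrative, not the repository's actual API.

def splice_visual_tokens(text_embeds, visual_embeds, bos_len=1):
    """Insert visual embeddings right after the BOS embedding.

    text_embeds:   per-token embedding vectors for the text prompt
    visual_embeds: per-token embedding vectors from the Q-Former
    Returns the combined sequence [BOS, visual..., rest of text...].
    """
    bos = text_embeds[:bos_len]
    rest = text_embeds[bos_len:]
    return bos + visual_embeds + rest

# The training path (forward) and the generation path (forward_inference)
# must both build the sequence this same way; otherwise the model is
# trained on one token layout and decoded on another.
text = [[1.0], [2.0], [3.0]]   # BOS + two text tokens
visual = [[9.0], [8.0]]        # two visual tokens
combined = splice_visual_tokens(text, visual)
# combined is [[1.0], [9.0], [8.0], [2.0], [3.0]]
```

The key point is not the specific layout but that a single function (or a single, shared ordering) is used by both paths, so a layout change can never silently diverge between training and inference.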

qihan96 commented 1 year ago

Thanks for following up! How does the difference affect the results?

ChrisLiu6 commented 1 year ago

> Thanks for following up! How does the difference affect the results?

In practice, the difference does not seem to be very significant.

qihan96 commented 1 year ago

Got it. Thank you for the timely response. Appreciated.