merlinarer opened 1 year ago
Same problem. It seems the visual query is still late-fused in this implementation, whereas it should be early-fused.
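To make the distinction concrete, here is a minimal, hypothetical sketch of the two fusion strategies (not the repo's actual code): late fusion injects the visual query only into the top transformer layers as extra prompt tokens, while early fusion adds the visual features to the word embeddings at the input. The `toy_layer`, `insert_from`, and mean-pooling choices are illustrative assumptions.

```python
import numpy as np

def toy_layer(h):
    # Stand-in for a transformer block (the real model uses attention + MLP).
    return np.tanh(h)

def late_fusion(tokens, visual, n_layers=4, insert_from=2):
    # Late fusion (as in the V1-style adapter): visual tokens are
    # concatenated as prompt tokens only in the top layers.
    h = tokens
    for i in range(n_layers):
        if i >= insert_from:
            h_in = np.concatenate([visual, h], axis=0)
            h = toy_layer(h_in)[visual.shape[0]:]  # drop the prompt positions
        else:
            h = toy_layer(h)
    return h

def early_fusion(tokens, visual, n_layers=4):
    # Early fusion (what the V2 paper describes for the visual query):
    # visual features are injected at the input, before any layer runs.
    h = tokens + visual.mean(axis=0, keepdims=True)
    for _ in range(n_layers):
        h = toy_layer(h)
    return h
```

With the same inputs, the two strategies produce different hidden states of the same shape, which is exactly the discrepancy being reported here.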
I noticed the same issue, so I'm just commenting for support.
Found this issue: llama_adapter_v2_multimodal/llama/llama_adapter.py is the adapter implementation from the LLaMA-Adapter V1 paper. I would appreciate it if you could release the adapter implementation for LLaMA-Adapter V2, or share an estimated timeline.
I found the same problem. What's more, the scale factor mentioned in the V2 paper does not seem to exist in the implementation in llama_adapter_v2_multimodal7b/llama/llama_adapter.py, even though it is explicitly present in the chat65b implementation, in the function forward_linear_with_scale_and_bias.
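For reference, the chat65b function in question amounts to a frozen linear layer whose output is modulated by a learnable per-feature scale and shifted by a learnable bias. The following is a hypothetical numpy sketch of that idea; the function name comes from the repo, but the exact signature and shapes here are my assumptions, not the actual code.

```python
import numpy as np

def forward_linear_with_scale_and_bias(x, weight, scale, bias):
    # Sketch of V2-style scale/bias tuning (assumed form, not the repo's code):
    # the frozen linear projection x @ W^T is rescaled elementwise by a
    # learnable `scale` and shifted by a learnable `bias`.
    return x @ weight.T * scale + bias

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
weight = rng.standard_normal((3, 4))  # frozen pretrained weight
# Initializing scale to 1 and bias to 0 reproduces the frozen layer exactly,
# so training starts from the pretrained behavior.
scale = np.ones(3)
bias = np.zeros(3)
out = forward_linear_with_scale_and_bias(x, weight, scale, bias)
```

If this modulation is missing from llama_adapter_v2_multimodal7b, then that code path reduces to a plain frozen linear forward, which would match the suspicion that it implements V1 rather than V2.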
Any help would be highly appreciated.
Thanks for sharing the code. llama_adapter_v2_multimodal seems to be the implementation of the LLaMA-Adapter V1 paper. How, then, can we reproduce the results from the V2 paper?