[Closed] notoookay closed this issue 1 month ago
Hi, after I trained a llama2-7b model with LoRA, I tried to merge it. It seems like resizing the token embeddings of the base model should happen before merging. I reordered the process and it worked. Can you check that this is correct?
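Based on the reordering described, a minimal sketch might look like the following. The function name, paths, and the assumption that the added tokens live in the tokenizer saved alongside the adapter are illustrative, not from the thread; the key point is that `resize_token_embeddings` runs on the base model before the adapter is attached and merged.

```python
# Hedged sketch: resize the base model's token embeddings to match the
# fine-tuned tokenizer BEFORE attaching and merging the LoRA adapter.

def merge_lora_with_resized_embeddings(base_model_path, adapter_path, out_path):
    # Imports are deferred so the sketch can be read (and its ordering
    # checked) without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # The tokenizer saved with the adapter may contain added tokens
    # (e.g. a pad token), so load it first to get the final vocab size.
    tokenizer = AutoTokenizer.from_pretrained(adapter_path)

    base_model = AutoModelForCausalLM.from_pretrained(base_model_path)

    # Step 1: resize embeddings on the BASE model, before merging.
    base_model.resize_token_embeddings(len(tokenizer))

    # Step 2: attach the LoRA adapter and fold its weights into the
    # (now correctly sized) base model.
    model = PeftModel.from_pretrained(base_model, adapter_path)
    merged = model.merge_and_unload()

    merged.save_pretrained(out_path)
    tokenizer.save_pretrained(out_path)
    return merged
```

Doing the resize after merging instead would leave the merged checkpoint's embedding matrix mismatched with the LoRA-trained weights whenever tokens were added during fine-tuning.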
Thanks, this seems to work fine.