Great job!
One question: Did you use full fine-tuning throughout the entire process? Have you tried LoRA and compared its performance?
I remember that a previous work (https://arxiv.org/pdf/2401.00368.pdf), which also uses an LLM as the embedder, trained with LoRA and still achieved good performance.