EliverQ opened this issue 1 year ago
Hi, thanks a lot for your interest in INSTRUCTOR!
I trained the INSTRUCTOR model with a single GPU, and the setting is to train for 20K steps. In general, the hyperparameters are not fixed and may be adjusted for different purposes.
Thank you! I have a few more questions; apologies again for any inconvenience my inquiries may cause.
What about INSTRUCTOR-base and INSTRUCTOR-xl? Did you use exactly the same experimental setup as for INSTRUCTOR-large? Were they also trained for 20,000 steps? If so, could you kindly share the convergence steps for the models reported in the paper?
Yes, we adopt the same setting with minor modifications to adapt to different machines. For details, you may refer to https://github.com/HKUNLP/instructor-embedding/issues/42.
I have tried exactly the same setting as yours, but it still doesn't work. I don't know how to improve further or replicate your performance, and this has been quite troubling for me.
Hello! I must say, INSTRUCTOR is truly an amazing project, and I'm eager to replicate your training process. Nevertheless, despite following your training settings, I'm unable to achieve performance comparable to INSTRUCTOR.
Here are the settings I used, following your paper:
Regarding the hardware, I'm using 4 × 80GB A100 GPUs with a batch size of 4 per GPU, as you mentioned in #42, and I've trained for 10k steps.
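To put these numbers in perspective, here is a quick back-of-the-envelope sketch of how many training pairs each configuration sees. The per-GPU batch size of your single-GPU 20K-step run isn't stated in this thread, so I leave it as a variable; I'm also assuming one "step" means one optimizer update over the global batch, with no gradient accumulation.

```python
# Rough comparison of total training pairs seen under each configuration.
# Numbers for my run come from this thread; the batch size of the
# single-GPU 20K-step run is unknown here, so it is swept over a few guesses.

def total_examples(num_gpus: int, per_gpu_batch: int, steps: int) -> int:
    """Effective (global) batch size times number of optimizer steps."""
    return num_gpus * per_gpu_batch * steps

# My setup: 4 x A100 80GB, batch size 4 per GPU, 10k steps.
mine = total_examples(num_gpus=4, per_gpu_batch=4, steps=10_000)
print(f"4 GPUs x batch 4 x 10k steps  -> {mine:,} pairs")  # 160,000

# Reported setup: single GPU, 20k steps, per-GPU batch size B (not stated).
for B in (4, 8, 16):
    ref = total_examples(num_gpus=1, per_gpu_batch=B, steps=20_000)
    print(f"1 GPU  x batch {B} x 20k steps -> {ref:,} pairs")
```

If the effective batch sizes differ, the two runs see very different amounts of data, which might partly explain the gap.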
I'd greatly appreciate it if you could share more details or suggest potential methods for improvement. Thank you very much!
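One sanity check I plan to run in the meantime is comparing embeddings from my reproduced checkpoint against the released hkunlp/instructor-large on a few instruction/sentence pairs. This is only a rough sketch: `./my_instructor_checkpoint` is a placeholder for my local output directory, the example instructions are made up, and I'm assuming my checkpoint loads the same way as the released model.

```python
# Compare embeddings from a locally trained checkpoint against the released
# hkunlp/instructor-large on a handful of instruction/text pairs.
# "./my_instructor_checkpoint" is a placeholder for my own run's output;
# whether it loads directly like this is an assumption on my part.
import numpy as np
from InstructorEmbedding import INSTRUCTOR

pairs = [
    ["Represent the science sentence:", "Parton energy loss in QCD matter"],
    ["Represent the question for retrieval:", "How do I fine-tune a T5 encoder?"],
]

released = INSTRUCTOR("hkunlp/instructor-large")
reproduced = INSTRUCTOR("./my_instructor_checkpoint")  # placeholder path

a = released.encode(pairs)
b = reproduced.encode(pairs)

# Cosine similarity per pair; values far from 1.0 suggest the reproduced
# model has drifted substantially from the released one.
cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
print(cos)
```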