Open wing7171 opened 1 year ago
Hi, I have a problem fine-tuning sgpt-bloom-7b1-msmarco because of an OOM error. Could you please share how you did the contrastive fine-tuning on bloom-7b1? (I think distributed training is needed, but I failed.)
The command I used is the one listed here: https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco. I ran it on 8 A100 GPUs with 80 GB each, I think.
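In case it helps to narrow down the OOM: one memory-saving technique often used when fine-tuning models of this size is activation (gradient) checkpointing, which recomputes activations during the backward pass instead of storing them. A minimal PyTorch sketch on a toy model (purely illustrative, not the actual SGPT training code; the `Block`/`Model` classes are made up for this example):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    """Toy residual feed-forward block standing in for a transformer layer."""
    def __init__(self, d):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        return x + self.ff(x)

class Model(nn.Module):
    def __init__(self, d=64, n=4):
        super().__init__()
        self.blocks = nn.ModuleList(Block(d) for _ in range(n))

    def forward(self, x):
        for block in self.blocks:
            # Recompute this block's activations in the backward pass
            # instead of keeping them in memory during the forward pass.
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = Model()
x = torch.randn(2, 8, 64, requires_grad=True)
out = model(x)
out.sum().backward()  # gradients flow through the checkpointed blocks
print(x.grad.shape)
```

For an 8x80GB A100 setup, this is usually combined with sharded optimizer states (e.g. DeepSpeed ZeRO or FSDP) launched via a distributed launcher, but the exact configuration the authors used is what the question above is asking for.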