Aligner2024 / aligner

Achieving Efficient Alignment through Learned Correction
https://aligner2024.github.io/

Training the aligner runs out of GPU memory #4

Closed angelOnly closed 3 months ago

angelOnly commented 4 months ago

Looking at the training flow diagram, my understanding is that the aligner is fine-tuned with full parameters. I tried running the Baichuan 7B model on a 4090 with 24 GB of VRAM, but it won't run — the GPU memory fills up. Is my only option a card with more memory? How much VRAM would be enough? [screenshot attached]
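A rough back-of-envelope estimate (an editorial sketch, not from the thread) shows why 24 GB cannot hold full-parameter fine-tuning of a 7B model: with Adam in mixed precision, a common rule of thumb is about 16 bytes per parameter for weights, gradients, and optimizer state, before counting activations.

```python
# Back-of-envelope VRAM estimate for full fine-tuning a 7B model with Adam
# in mixed precision. The 16 bytes/param figure is a rule of thumb:
# fp16 weights (2 B) + fp16 grads (2 B) + fp32 master weights (4 B)
# + Adam first moment (4 B) + Adam second moment (4 B).
params = 7e9
bytes_per_param = 2 + 2 + 4 + 4 + 4
gib = params * bytes_per_param / 2**30
print(f"~{gib:.0f} GiB for model + gradients + optimizer state")
```

Even ignoring activations and framework overhead, this is roughly 100 GiB, far beyond a single 24 GB card, which is why the answer below points toward LoRA or a smaller base model.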

Aligner2024 commented 4 months ago

Yes, the aligner is also trained with full-parameter fine-tuning, using Q-A-C (question-answer-correction) triples for residual correction. Training a 7B aligner requires the same resources as SFT on a 7B model. You can consider LoRA, or a smaller aligner — for example, training the aligner from qwen1.5-2B, which can be trained directly on a 3090.
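To make the LoRA suggestion concrete, here is a minimal sketch of the LoRA idea itself (an editorial illustration, not the repository's code): instead of updating a full `d_out × d_in` weight matrix `W`, you freeze `W` and train only a low-rank residual `B @ A` scaled by `alpha / r`, which is why the trainable footprint, and hence the optimizer memory, shrinks dramatically. The dimensions below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 4096, 4096, 8, 16  # hypothetical layer size and LoRA rank

W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # frozen base weight
A = (0.01 * rng.standard_normal((r, d_in))).astype(np.float32)  # trainable
B = np.zeros((d_out, r), dtype=np.float32)  # trainable; zero init => no change at start

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without
    # ever materializing the full residual matrix.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4f}")  # ~0.4% of full FT
```

Because only `A` and `B` carry gradients and Adam state, the optimizer memory scales with the low-rank matrices rather than the full 7B parameters, which is what lets a LoRA run fit where full fine-tuning does not. In practice one would use a library such as HuggingFace PEFT rather than hand-rolled matrices.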