h-ccc opened this issue 1 year ago (status: Open)
Thanks for your wonderful work. A bottleneck of MOSS may lie in the datasets used during the pretraining phase. We would like to continue pretraining MOSS on additional corpora such as 悟道 (WuDao), Wikipedia, and so on. Is there a supported way to continue pretraining? Thanks in advance!
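For context, continued pretraining is just resuming the next-token (causal LM) objective on new corpora. The sketch below is a minimal, hypothetical illustration of that objective in plain PyTorch: `TinyLM`, the vocabulary size, and the random batch are all stand-ins invented for the example; in practice you would load the released MOSS base checkpoint and stream tokenized WuDao/Wikipedia batches instead.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy causal LM standing in for the MOSS base model; in a real run you would
# load the released checkpoint (and ideally its optimizer state) instead.
VOCAB, DIM, SEQ = 100, 32, 16

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.proj = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        # (batch, seq) token ids -> (batch, seq, vocab) logits
        return self.proj(self.embed(ids))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy pre-tokenized batch; real batches would stream from the new corpora.
batch = torch.randint(0, VOCAB, (4, SEQ))
inputs, labels = batch[:, :-1], batch[:, 1:]  # next-token objective

losses = []
for step in range(20):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, VOCAB), labels.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(losses[-1] < losses[0])  # loss falls as the model fits this batch
```

The same loop shape applies to the full model; the practical differences are loading the pretrained weights, a data pipeline over the new corpora, and distributed training, none of which change the objective shown here.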