google-research / FLAN


Could you share the training loss to improve reproducibility? #83

Open xuanqing94 opened 1 year ago

xuanqing94 commented 1 year ago

Hi, thanks for sharing the datasets! I'm trying to train a FLAN model using T5 and other backbone models. However, I'm not confident about how well I reproduced your results; specifically, I got much lower MMLU scores. Could you please share the training loss curve (or simply the loss at convergence)? Below is mine:

[image: training loss curve]

I was using similar settings (batch size = 80, max_seq_len = 2300). The final loss is around 0.6 after smoothing. What are the official values?
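For context, here is a minimal sketch of the kind of fine-tuning loop described above, using a HuggingFace T5 backbone and the hyperparameters quoted in this thread (batch size 80, max input length 2300). The checkpoint name, placeholder data, and target length are assumptions for illustration, not the official FLAN recipe:

```python
# Illustrative fine-tuning sketch; not the official FLAN training setup.
# Assumes `transformers` and `torch` are installed and a dataset of
# (input_text, target_text) pairs is available.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "google/t5-v1_1-xl"  # hypothetical backbone; swap in your own
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Placeholder data; in practice this would be the FLAN instruction mixture.
train_pairs = [("Translate to German: Hello", "Hallo")] * 160

def collate(batch):
    # batch: list of (input_text, target_text) pairs
    inputs = tokenizer([x for x, _ in batch], max_length=2300,
                       truncation=True, padding=True, return_tensors="pt")
    labels = tokenizer([y for _, y in batch], max_length=256,
                       truncation=True, padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in the loss
    return inputs, labels

loader = DataLoader(train_pairs, batch_size=80, shuffle=True, collate_fn=collate)

model.train()
for inputs, labels in loader:
    inputs = {k: v.cuda() for k, v in inputs.items()}
    loss = model(**inputs, labels=labels.cuda()).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss: {loss.item():.4f}")  # this is the raw (unsmoothed) loss
```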

StephennFernandes commented 1 year ago

Hey, could you please let me know where I can find the scripts/.gin files to train FLAN on t5x-based models?

xuanqing94 commented 1 year ago

@StephennFernandes I can't help you with that because I am using a PyTorch-based training framework.

StephennFernandes commented 1 year ago

You mean you used the HuggingFace model and fine-tuned it on the FLAN datasets?

That works fine for me as well.

BTW, did you get results relatively similar to the official FLAN-T5?

xuanqing94 commented 1 year ago

I used checkpoints downloaded from HuggingFace, but I ran with my in-house distributed training code.

I only tested and compared it with FLAN-T5 on the MMLU dataset. It turns out that my results are far below those of the official FLAN-T5 checkpoints.
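For what it's worth, here is a sketch of one common way to score a seq2seq checkpoint on MMLU-style multiple choice: rank the answer letters by model likelihood. The prompt format and question below are hypothetical; the official FLAN-T5 numbers may use different prompting and normalization, so scores from this sketch need not match them:

```python
# MMLU-style scoring sketch: pick the answer letter ("A".."D") with the
# highest likelihood under the model. Illustrative only.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "google/flan-t5-base"  # any FLAN-T5 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

def choice_score(prompt: str, answer: str) -> float:
    """Log-likelihood (up to token averaging) of `answer` given `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    labels = tokenizer(answer, return_tensors="pt").input_ids
    with torch.no_grad():
        # HF returns mean cross-entropy over label tokens; negate it so
        # that higher means more likely.
        loss = model(**inputs, labels=labels).loss
    return -loss.item()

# Hypothetical example question in a typical MMLU prompt format.
prompt = (
    "The following is a multiple choice question about physics.\n"
    "What force keeps planets in orbit around the Sun?\n"
    "A. friction\nB. gravity\nC. magnetism\nD. inertia\n"
    "Answer:"
)
scores = {c: choice_score(prompt, c) for c in ["A", "B", "C", "D"]}
prediction = max(scores, key=scores.get)
print(prediction)  # accuracy = fraction of questions where this matches gold
```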