alibaba / FederatedScope

An easy-to-use federated learning platform
https://www.federatedscope.io
Apache License 2.0

Smaller test/val loss but lower evaluation accuracy #750

Open shuangyichen opened 7 months ago

shuangyichen commented 7 months ago

When fine-tuning llama-7b on gsm-8k with different fine-tuning methods, I compared the test loss and evaluation accuracy of each method and found that one of them has a smaller test/val loss but lower evaluation accuracy. Is this reasonable?

qbc2016 commented 7 months ago

Hello! It may be related to the scale of your dataset partition: if the test/val dataset is too small, the loss will be unstable. Also, the evaluation accuracy depends on a single exact value parsed from the generated text, whereas the val/test loss is calculated over all the tokens the model generates. We also find that the validation loss may not be a reliable indicator of generalization performance; for more details, please refer to our paper.

Best regards,
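
To make the distinction concrete, here is a hypothetical sketch (not FederatedScope's actual evaluation code; the answer parsing assumes GSM-8K's `#### <number>` answer format):

```python
import re

import torch
import torch.nn.functional as F

def val_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> float:
    # Cross-entropy averaged over *every* generated token, so improvements
    # on the intermediate reasoning tokens also lower the loss.
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)), target_ids.view(-1)
    ).item()

# GSM-8K marks the final answer with "#### <number>".
ANSWER_RE = re.compile(r"####\s*(-?[\d,]+)")

def exact_match(generated_text: str, gold_answer: str) -> bool:
    # Accuracy hinges on one parsed value: the final number must match exactly.
    m = ANSWER_RE.search(generated_text)
    return m is not None and m.group(1).replace(",", "") == gold_answer.replace(",", "")
```

A model can lower the average loss by fitting the reasoning tokens better while still producing a wrong (or unparseable) final number, so the two metrics can move in opposite directions.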

shuangyichen commented 7 months ago

I wonder whether the phenomenon discussed in your paper appears only in the low-fidelity scenario, or in general FL?

qbc2016 commented 7 months ago

In the paper, what we observed was in a low-fidelity scenario. For fine-tuning LLMs in general FL, it may be interesting to investigate the relationship between the val/test loss and the final evaluation accuracy; I'm not sure there has been a study on this.
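
If someone wanted to look into it, a minimal sketch (an assumed setup, not anything from the paper) would be to log one (val loss, eval accuracy) pair per FL round or checkpoint and check their rank correlation:

```python
from scipy.stats import spearmanr

# Hypothetical per-round measurements collected from training logs.
val_losses = [1.42, 1.31, 1.25, 1.22, 1.20]
accuracies = [0.18, 0.22, 0.21, 0.19, 0.20]

# A rho near -1 would mean lower loss reliably predicts higher accuracy;
# a weak rho would support the observation that loss is not a reliable indicator.
rho, pvalue = spearmanr(val_losses, accuracies)
print(f"Spearman rho={rho:.2f}, p={pvalue:.3f}")
```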