tianyi-lab / Cherry_LLM

[NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models

Need help: the loss curve is strange. #1

Closed: ifshine closed this issue 1 year ago

ifshine commented 1 year ago

Thanks for your excellent work! My question is in the title: the loss curve looks strange when I train the final cherry model.

[screenshot of the strange training loss curve]
MingLiiii commented 1 year ago

Hi, thank you very much for your interest in this work!

Firstly, I would like to clarify that this problem does not come from our selected data; it most likely comes from the Stanford Alpaca codebase. You can find the training losses of our different models in our Hugging Face repo: https://huggingface.co/MingLiiii/cherry-alpaca-5-percent-7B/blob/main/trainer_state.json
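
If you want to compare your run against those reference losses, here is a minimal sketch, assuming the file follows the standard Hugging Face `Trainer` state format, where `log_history` is a list of logged metrics whose training entries carry both `step` and `loss`:

```python
import json

import matplotlib.pyplot as plt

# Load a trainer_state.json exported by the Hugging Face Trainer.
# Assumption: the standard format, where "log_history" is a list of
# dicts and training-loss entries contain both "step" and "loss".
with open("trainer_state.json") as f:
    state = json.load(f)

logs = [entry for entry in state["log_history"] if "loss" in entry]
steps = [entry["step"] for entry in logs]
losses = [entry["loss"] for entry in logs]

plt.plot(steps, losses)
plt.xlabel("step")
plt.ylabel("training loss")
plt.show()
```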

Then, for this problem, I think directly downgrading transformers to 4.28.1 will solve it: `pip install transformers==4.28.1`, and you may also need to re-install wandb: `pip install wandb`
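
After re-installing, it is worth double-checking that your environment actually picked up the downgraded version. A quick sanity check (nothing specific to this repo):

```python
import transformers
import wandb

# The reported fix relies on transformers 4.28.1, so verify that the
# downgrade took effect in the active environment before re-training.
print("transformers:", transformers.__version__)  # expected: 4.28.1
print("wandb:", wandb.__version__)
```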

You can find similar problems here:

- https://github.com/tatsu-lab/stanford_alpaca/issues/298
- https://github.com/tloen/alpaca-lora/issues/418
- https://github.com/tloen/alpaca-lora/issues/170

Hope it works for you. Please let me know if you have any other questions~

ifshine commented 1 year ago

As a beginner, I felt at a loss when I saw the loss curve turn so strange.

Thank you so much for your quick and detailed response. The loss curve is normal now.

(I also encountered a highly oscillatory loss curve while running the code of some other projects. Would it be convenient for you to offer some debugging suggestions?)

MingLiiii commented 1 year ago

Hi, thank you for asking, but I doubt I can fix your problem, since loss curves are highly task-specific, and I am really not an expert.

Anyway, feel free to send me an email if you would like to discuss further.