tianyi-lab / Cherry_LLM

[NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models

a confusion about data_by_IFD #3

Closed wangjingyeye closed 1 year ago

wangjingyeye commented 1 year ago

I have read your paper in detail, and it's great work. I still have one question: when calculating the IFD scores, why do you use log_softmax + nll_loss instead of using cross_entropy or ppl directly? As I understand it, log_softmax + nll_loss is equivalent to cross_entropy, so using cross_entropy or ppl should also work. I don't know if my understanding is correct. Thanks for taking the time out of your busy schedule to reply.

MingLiiii commented 1 year ago

Hi, thanks for asking~ I think your question can be split into two separate questions.

1 How about using cross_entropy loss? That is totally fine. As you mentioned, log_softmax + nll_loss is the same as cross_entropy, so using it directly causes no problems; I wrote it that way purely out of personal habit. You can use cross_entropy directly, or even use loss = outputs.loss rather than calculating the loss for each token. In our later implementation, we also use loss = outputs.loss directly for simplicity.
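
For later readers, here is a minimal standalone sketch of that equivalence (plain PyTorch, not the repository's scoring code; the shapes and vocabulary size are made up for illustration):

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: (seq_len, vocab_size) logits and per-position target ids.
logits = torch.randn(6, 32000)
labels = torch.randint(0, 32000, (6,))

# 1) log_softmax + nll_loss, computed in two explicit steps
loss_a = F.nll_loss(F.log_softmax(logits, dim=-1), labels)

# 2) cross_entropy, which fuses the same two steps into one call
loss_b = F.cross_entropy(logits, labels)

print(torch.allclose(loss_a, loss_b))  # True: the two formulations agree
# A Hugging Face causal LM's outputs.loss is this same cross-entropy, averaged
# over the unmasked (label != -100) positions after the internal label shift.
```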

2 How about using perplexity? That is also fine. The overall philosophy is the same, but the equation changes a little: the ratio between perplexities becomes the subtraction of the losses. We compared both methods and the resulting performances are almost identical. We will include these experiments in the next version of the paper.
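
To make that relationship concrete, below is a rough, hypothetical sketch (not the repository's code) that scores one (instruction, answer) pair both ways. It assumes a Hugging Face causal LM model and tokenizer are already loaded; the helper names avg_answer_loss / score_pair and the simplified prompt masking are illustrative, not from the repo:

```python
import math
import torch

def avg_answer_loss(model, tokenizer, prompt, answer):
    # Average cross-entropy over the answer tokens, optionally conditioned on a prompt.
    # Simplified: we assume the prompt's token count matches the prefix of the joint
    # encoding, which only holds approximately for some tokenizers.
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    labels = full_ids.clone()
    if prompt:
        prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
        labels[:, :prompt_len] = -100  # ignore prompt positions; score only the answer
    with torch.no_grad():
        return model(full_ids, labels=labels).loss.item()

def score_pair(model, tokenizer, instruction, answer):
    loss_cond = avg_answer_loss(model, tokenizer, instruction, answer)  # loss of A given Q
    loss_direct = avg_answer_loss(model, tokenizer, "", answer)         # loss of A alone
    ppl_ratio = math.exp(loss_cond) / math.exp(loss_direct)  # ratio of perplexities
    loss_diff = loss_cond - loss_direct                      # subtraction of losses
    # math.log(ppl_ratio) == loss_diff (up to floating point), so the perplexity
    # ratio and the loss subtraction carry the same ordering information.
    return ppl_ratio, loss_diff
```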

Let me know if you have other questions~

wangjingyeye commented 1 year ago

Thank you for your detailed reply, which completely solved my problem.

I would like to ask another question. In the experimental configuration described in the paper, the batch size is 128. Is it also 128 when conducting the pre-experiment? If so, when the sample size is less than 128, will there be only one iteration per epoch? Looking forward to your answer~

MingLiiii commented 1 year ago

Yes, the batch size is also 128, following the Alpaca setting. We only train the pre-experienced model for 1 epoch to keep the extra computational cost as low as possible.
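
For reference, a hypothetical sketch of what such a configuration could look like with the Hugging Face Trainer; the split into per-device batch size and gradient accumulation (e.g. 4 GPUs x 4 x 8 = 128) and the learning rate are assumptions in the spirit of the Alpaca recipe, not the repository's actual launch script:

```python
from transformers import TrainingArguments

# Assumed values for illustration only: effective batch size 128, one epoch
# for the pre-experienced model, Alpaca-style hyperparameters.
args = TrainingArguments(
    output_dir="./pre_experienced_model",  # hypothetical output path
    num_train_epochs=1,                    # pre-experienced model is trained for 1 epoch
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,         # 4 GPUs x 4 per device x 8 steps = 128
    learning_rate=2e-5,                    # assumed, following common Alpaca settings
)
```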