tianyi-lab / Cherry_LLM

[NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models

Why is the process so slow? #21

Closed lihongxiacream closed 2 months ago

lihongxiacream commented 2 months ago

It takes about 100 hours to select from 50,000 multi-turn samples.

MingLiiii commented 2 months ago

Firstly, thank you for your interest in our work. Calculating IFD scores requires inference with LLMs, so it is naturally time-consuming. However, we also proposed Superfiltering (ACL'24), which uses small language models such as GPT-2 instead of LLMs to select the data, tremendously lowering the time and cost of the data selection process. If efficiency is important to you, please try it.
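
For reference, here is a minimal sketch of how an IFD-style score can be computed for a single (question, answer) pair with a Hugging Face causal LM. The official script differs in details such as prompt formatting and batching, and the tokenization at the prompt/answer boundary is only approximated here:

```python
# Minimal sketch of an IFD-style score for one (question, answer) pair.
# Assumes a Hugging Face causal LM; prompt templates, batching, and the
# exact handling in the official script are simplified here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-7B-Chat"  # any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

@torch.no_grad()
def answer_loss(prompt: str, answer: str) -> float:
    """Mean cross-entropy over the answer tokens only (prompt tokens are masked)."""
    prompt_len = len(tokenizer(prompt).input_ids) if prompt else 0
    ids = tokenizer(prompt + answer, return_tensors="pt").input_ids.to(model.device)
    labels = ids.clone()
    labels[:, :prompt_len] = -100  # -100 tells the HF loss to ignore these positions
    return model(input_ids=ids, labels=labels).loss.item()

def ifd_score(question: str, answer: str) -> float:
    conditioned = answer_loss(question, answer)  # loss of A given Q
    direct = answer_loss("", answer)             # loss of A alone
    return conditioned / direct
```

The score compares the answer loss conditioned on the question against the answer loss alone, so every (question, answer) pair needs two forward passes through the model, which is why the total time scales with both the number of pairs and their length.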

Secondly, you did not provide enough information about your setup:

  1. Since this method was originally designed for single-turn data, how did you apply it to multi-turn samples? Do you calculate the IFD score once per turn? Did you include the whole preceding conversation when calculating, or only the question at each turn?
  2. How large are your 50k multi-turn samples? 50k is not a small number; even on the 50k simple Alpaca samples, it takes several hours. If the questions/answers in your samples are long and each sample has many turns, it will certainly take much longer. You may want to estimate the token count and inference count first (see the sketch after this list).
  3. What base LLM did you use?
  4. What GPU did you use?
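
For example, here is a rough way to estimate the workload before running selection, assuming a ShareGPT-style format with a `conversations` list of alternating user/assistant turns (these field names are just an assumption about your data):

```python
# Rough workload estimate before running selection. Assumes a ShareGPT-style
# file: a JSON list of samples, each with a "conversations" list of
# alternating user/assistant turns (these field names are an assumption).
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")

with open("data.json", encoding="utf-8") as f:
    samples = json.load(f)

total_tokens, total_passes = 0, 0
for sample in samples:
    turns = sample["conversations"]
    # one (question, answer) pair per two turns; scoring each pair needs
    # two forward passes (conditioned and direct answer loss)
    for i in range(0, len(turns) - 1, 2):
        q, a = turns[i]["value"], turns[i + 1]["value"]
        total_tokens += len(tokenizer(q + a).input_ids)
        total_passes += 2

print(f"~{total_tokens:,} tokens, ~{total_passes:,} forward passes")
```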

Again, thank you for your interest! We highly recommend trying our Superfiltering (ACL'24) if efficiency is important to you!

lihongxiacream commented 2 months ago

Thank you for your answer!! The dataset is indeed very large, about 458 MB. I use only the question and answer at each turn, without the history, with the Qwen1.5-7B-Chat model on an A800 GPU, and I calculate the loss once per turn during data analysis. Do you have any good ideas for accelerating inference? Thank you again, and I will also try the Superfiltering method.

lihongxiacream commented 2 months ago

And does this project support selection on Chinese datasets?

MingLiiii commented 2 months ago

Thank you for your interest!

Based on your data, I think it is quite reasonable that it takes many hours. Though there are only 50k samples, the dataset is almost 20 times the size of the Alpaca data. Unfortunately, I am no expert on accelerating inference, sorry about that.

As for whether this method supports Chinese datasets, I think the answer is yes. Our method is language-agnostic: it computes and compares the losses/perplexities produced by base models. So as long as the base model itself supports the language, our method should work.

MingLiiii commented 2 months ago

If you are interested in our method or have further questions, we can also connect on WeChat for easier communication. Please send me an email if you are interested!

Thank you!

lihongxiacream commented 2 months ago

Thanks!! I also have some questions about Superfiltering, and I have sent my WeChat ID by email. Looking forward to your reply!!!

MingLiiii commented 2 months ago

I don't think I have received your email. Please check whether you sent it to the wrong address.