Closed lihongxiacream closed 2 months ago
Firstly, thank you for your interest in our work. Calculating IFD scores requires inference on LLMs, so it is naturally time-consuming. However, we also proposed Superfiltering (ACL'24), which uses small language models like GPT-2 instead of LLMs to select the data; it tremendously lowers the time and cost of the data selection process. If efficiency is important to you, please try it.
Secondly, you did not provide enough information about your observation:
Again, thank you for your interest! We highly recommend trying our Superfiltering (ACL'24) if efficiency is important to you!
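For readers unfamiliar with the scoring step being discussed: the IFD score compares the model's perplexity on an answer given its instruction against its perplexity on the answer alone. A minimal sketch of that ratio and of top-k selection, assuming the per-token losses have already been collected from a model forward pass (the function and variable names here are illustrative, not the repository's actual API):

```python
# Sketch of IFD-style scoring from precomputed per-token losses.
# `cond_losses`: per-token cross-entropy of the answer given the instruction.
# `uncond_losses`: per-token cross-entropy of the answer alone.
from math import exp

def mean(xs):
    return sum(xs) / len(xs)

def ifd_score(cond_losses, uncond_losses):
    """Ratio of conditioned to unconditioned perplexity.

    A score near 1 means the instruction barely helps the model predict
    the answer, i.e. the sample is "harder" / more informative.
    """
    ppl_cond = exp(mean(cond_losses))
    ppl_uncond = exp(mean(uncond_losses))
    return ppl_cond / ppl_uncond

def select_top(samples, scores, k):
    """Keep the k samples with the highest score (additional filters used
    in the paper, such as discarding scores >= 1, are omitted here)."""
    order = sorted(range(len(samples)), key=lambda i: scores[i], reverse=True)
    return [samples[i] for i in order[:k]]
```

The expensive part in practice is producing the loss values, which needs one conditioned and one unconditioned forward pass per sample; the ratio itself is cheap.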
Thank you for your answer!! The dataset is indeed very large, 458 MB. I just use the question and answer at each turn instead of the full history, with the Qwen1.5-7B-Chat model on one A800 GPU, and I calculate the loss once per turn during data analysis. Do you have any ideas for accelerating inference? Thank you again, and I will also try the Superfiltering method.
And does this project support Chinese dataset selection?
Thank you for your interest!
Based on your data, I think it is quite reasonable that it costs many hours. Though it has only 50k samples, the size is almost 20 times that of the Alpaca data. Unfortunately, I am no expert at accelerating inference, sorry about that.
As for whether this method supports Chinese datasets, I think the answer is yes. Our method is language-agnostic: it computes and compares the losses/perplexities produced by base models. So as long as the base model itself supports the language, our method should work.
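To illustrate why the language does not matter: the selection pipeline only ever consumes per-token loss values, so any script the tokenizer covers works the same way. A toy sketch with a character-unigram "model" standing in for a real base LLM (purely illustrative; a real run would use the LLM's per-token cross-entropy):

```python
# Toy demonstration that perplexity-based selection is language-agnostic:
# the pipeline only sees per-token losses, not the language itself.
from collections import Counter
from math import log, exp

def train_unigram(corpus):
    """Build a character-level unigram probability table (toy model)."""
    counts = Counter("".join(corpus))
    total = sum(counts.values())
    return {ch: c / total for ch, c in counts.items()}

def char_losses(model, text, floor=1e-6):
    """Per-character negative log-likelihood (cross-entropy)."""
    return [-log(model.get(ch, floor)) for ch in text]

def mean_ppl(losses):
    return exp(sum(losses) / len(losses))

# The same code path handles Chinese and English text:
zh_model = train_unigram(["困惑度衡量语言模型的不确定性", "语言模型预测下一个词"])
en_model = train_unigram(["perplexity measures model uncertainty"])
zh_ppl = mean_ppl(char_losses(zh_model, "语言模型"))
en_ppl = mean_ppl(char_losses(en_model, "model"))
```

With a real base model, `char_losses` would be replaced by a forward pass over the model's own tokens; nothing else in the pipeline changes per language.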
If you are interested in our method or have further questions, we can also connect on WeChat for easier communication. Please send me an email if you are interested!
Thank you!
Thanks!! I also have some questions about Superfiltering, and I have sent my WeChat ID through email. Looking forward to your reply!!!
I don't think I received your email. Please check whether you sent it to the wrong address.
It costs 100 hours to select from 50,000 multi-turn samples.