AIoT-MLSys-Lab / Efficient-LLMs-Survey

[TMLR 2024] Efficient Large Language Models: A Survey
https://arxiv.org/abs/2312.03863
970 stars 82 forks

There is a paper under Structured Pruning that I think is not related to Model Pruning #35

Open Michael-jze opened 2 weeks ago

Michael-jze commented 2 weeks ago

I believe this paper, "Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models", arXiv, 2024 [Paper], uses a small reference model to prune the training dataset so that the large model trains to better performance; it does not prune the model itself. Should it be grouped under the Data Selection scope instead?
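For context on what the referenced paper does, here is a minimal, hypothetical sketch of perplexity-based data pruning: a small reference model scores each training document by perplexity, and only documents satisfying a selection criterion are kept for large-model training. The choice of reference model ("gpt2"), the truncation length, and the keep-lowest-fraction rule below are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of perplexity-based data pruning with a small reference model.
# Assumptions: "gpt2" as the reference model and a keep-lowest-fraction
# selection rule; the paper studies its own models and criteria.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of `text` under the small reference model."""
    enc = tokenizer(text, return_tensors="pt",
                    truncation=True, max_length=1024).to(device)
    out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())  # exp of mean token cross-entropy

def prune_corpus(corpus: list[str], keep_fraction: float = 0.5) -> list[str]:
    """Score every document and keep the lowest-perplexity fraction
    (an illustrative criterion; other selection rules are possible)."""
    scored = sorted(corpus, key=perplexity)
    return scored[: int(len(scored) * keep_fraction)]
```

The point of the sketch is only that the object being pruned is the dataset, not the model's weights, which is why the entry seems to belong under Data Selection rather than Model Pruning.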
