Closed: weitianxin closed this issue 10 months ago.
Thank you for your valuable advice! Our survey paper and GitHub repository focus only on LLMs with billions of parameters or more. If you have any further suggestions for papers related to efficient LLMs, please continue to share them with us.
Thank you for conducting such an insightful survey. I wonder if it's possible to incorporate a recent ICML'23 work from UIUC. It centers on a one-shot compression technique for pre-trained language models (PLMs). The paper investigates the neural tangent kernel (NTK) of the multilayer perceptron (MLP) modules in a PLM and proposes to construct a lightweight PLM through NTK-approximating MLP fusion, i.e., merging similar hidden neurons of an MLP so that the compressed module's NTK approximates that of the original.
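To make the idea concrete, here is a minimal sketch of the fusion step, under simplifying assumptions: it clusters hidden neurons in plain weight space with k-means rather than using the paper's NTK-guided objective, and `fuse_mlp` and its signature are hypothetical names for illustration, not the authors' API.

```python
# Illustrative MLP fusion sketch (simplified; not the paper's exact algorithm).
# Each hidden neuron of a two-layer MLP is the bundle (W1[:, i], b1[i], W2[i, :]);
# similar bundles are clustered and merged into one centroid neuron.
import numpy as np
from sklearn.cluster import KMeans

def fuse_mlp(W1, b1, W2, k, seed=0):
    """Compress an MLP x -> relu(x @ W1 + b1) @ W2 from h to k hidden units."""
    d, h = W1.shape
    # Represent each neuron by its incoming weights, bias, and outgoing weights.
    bundles = np.concatenate([W1.T, b1[:, None], W2], axis=1)  # shape (h, d + 1 + d_out)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(bundles)

    W1_f = np.zeros((d, k)); b1_f = np.zeros(k); W2_f = np.zeros((k, W2.shape[1]))
    for c in range(k):
        members = labels == c
        W1_f[:, c] = W1[:, members].mean(axis=1)  # centroid of incoming weights
        b1_f[c] = b1[members].mean()
        W2_f[c] = W2[members].sum(axis=0)         # summing preserves the cluster's total output
    return W1_f, b1_f, W2_f

# Quick check: the fused MLP should roughly track the original on random inputs.
rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(16, 64)), rng.normal(size=64), rng.normal(size=(64, 16))
W1_f, b1_f, W2_f = fuse_mlp(W1, b1, W2, k=16)
x = rng.normal(size=(4, 16))
orig = np.maximum(x @ W1 + b1, 0) @ W2
fused = np.maximum(x @ W1_f + b1_f, 0) @ W2_f
print(np.abs(orig - fused).mean())
```

The intuition is that when neurons in a cluster compute near-identical functions, replacing them by one representative (with output weights summed) changes the MLP's output, and hence its NTK, only slightly; the actual paper formalizes this with an NTK-approximation guarantee.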