CASIA-IVA-Lab / FLAP

[AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models
https://arxiv.org/abs/2312.11983
Apache License 2.0

how much data was used in the pruning process? #4

Closed wyxscir closed 3 months ago

wyxscir commented 5 months ago

I would like to know how much data was used in the pruning process. Is it just like the example code, where nsamples=1024, meaning that only 1024 samples were used to determine the pruning results?

an-yongqi commented 3 months ago

Yes, you are correct. In our pruning process, the default number of calibration samples (nsamples) is set to 1024, as specified in scripts/llama_7b.sh and, consistently, in the other scripts. The results presented in our paper are based on nsamples=1024 for the pruning analysis. This value was chosen to balance computational efficiency against the representativeness of the sample for determining effective pruning outcomes. If you have any further questions or need clarification on other aspects of our work, please do not hesitate to reach out.
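As a rough illustration of what drawing nsamples=1024 calibration sequences looks like, here is a minimal sketch. This is not the repo's actual loader (FLAP draws its calibration data from a tokenized text corpus); the function name, toy corpus, and sequence length here are assumptions for demonstration only.

```python
import random

def sample_calibration_data(tokens, nsamples=1024, seqlen=8, seed=0):
    """Draw `nsamples` fixed-length windows from a token stream.

    Hypothetical stand-in for a pruning calibration loader: each sample
    is a contiguous slice of `seqlen` tokens chosen at a random offset.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(nsamples):
        start = rng.randrange(0, len(tokens) - seqlen)
        samples.append(tokens[start:start + seqlen])
    return samples

# Toy token stream standing in for a tokenized corpus.
corpus = list(range(10_000))
calib = sample_calibration_data(corpus, nsamples=1024, seqlen=8)
print(len(calib))     # → 1024 (number of calibration samples)
print(len(calib[0]))  # → 8 (tokens per sample)
```

The point is simply that pruning statistics are computed over a fixed, modest batch of calibration sequences rather than the full training corpus, which is why nsamples=1024 keeps the procedure cheap.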

wyxscir commented 3 months ago

Thanks for your reply, I understand