horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
https://arxiv.org/abs/2305.11627
Apache License 2.0

How to prune 20% of parameters? #80

Closed. sidhantls closed this issue 1 month ago.

sidhantls commented 2 months ago

In the README, `--pruning_ratio 0.25` is used, and it's mentioned that this prunes 20% of the parameters. Why is that? If I want to prune 10% of the parameters, should I use `--pruning_ratio 0.15`?
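For context, the kind of command I'm asking about is roughly the README's pruning example. This is a sketch from memory, not a verbatim copy: aside from `--pruning_ratio`, the script name `hf_prune.py` and the other flags (`--block_wise`, the layer-range options, `--pruner_type`, `--save_ckpt_log_name`) are my recollection of the repo's example and may differ from the current README.

```bash
# Rough sketch of a README-style pruning run.
# Only --pruning_ratio is quoted from the README text above;
# the remaining flags are recalled from the repo's examples and may vary by version.
python hf_prune.py \
    --pruning_ratio 0.25 \
    --block_wise \
    --block_mlp_layer_start 4 --block_mlp_layer_end 30 \
    --block_attention_layer_start 4 --block_attention_layer_end 30 \
    --pruner_type taylor \
    --save_ckpt_log_name llama_prune \
    --save_model
```

My question is how the value passed to `--pruning_ratio` maps to the fraction of total parameters actually removed.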