horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
https://arxiv.org/abs/2305.11627
Apache License 2.0

Sparse Mask question #24

Closed: coldplayers closed this issue 12 months ago

coldplayers commented 1 year ago

Hi, I have a question about weight sparsity: after merging the LoRA weights into the sparse weights, will the sparse weights become dense?

VainF commented 1 year ago

Hi @coldplayers, LLM-Pruner is a structural pruning method: it removes entire coupled weight groups rather than zeroing individual entries with a sparse mask, so the pruned model is already a smaller dense model. Merging the LoRA weights into it keeps it dense.
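
To illustrate the distinction, here is a minimal PyTorch sketch (not code from this repo, and the shapes are hypothetical). It shows that merging a LoRA update `W + B @ A` into a structurally pruned (dense, smaller) weight leaves it dense, whereas merging into an unstructured mask-based sparse weight would generally fill the zeros back in unless the mask is re-applied:

```python
# Minimal sketch, assuming hypothetical layer sizes; only PyTorch is required.
import torch

torch.manual_seed(0)

# Case 1: structural pruning (the LLM-Pruner setting).
# Whole rows/columns are removed, so the remaining weight is a smaller
# dense matrix; merging LoRA (W + B @ A) keeps it dense.
out_dim, in_dim, rank = 6, 8, 2          # hypothetical pruned dimensions
W_pruned = torch.randn(out_dim, in_dim)  # dense after structural pruning
B = torch.randn(out_dim, rank)
A = torch.randn(rank, in_dim)
W_merged = W_pruned + B @ A              # still a dense (out_dim, in_dim) matrix
print("structural: zero entries after merge =", int((W_merged == 0).sum()))

# Case 2: unstructured (mask-based) sparsity, shown only for contrast.
# Individual entries are zeroed by a mask; adding the low-rank update
# generally makes them nonzero again, so the merged weight becomes dense
# unless the mask is re-applied after merging.
mask = (torch.rand(out_dim, in_dim) > 0.5).float()
W_sparse = torch.randn(out_dim, in_dim) * mask
W_sparse_merged = W_sparse + B @ A
print("unstructured: zeros before merge =", int((W_sparse == 0).sum()),
      "| zeros after merge =", int((W_sparse_merged == 0).sum()))
```

Since LLM-Pruner corresponds to case 1, there is no sparse mask to lose when the LoRA weights are merged.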

coldplayers commented 12 months ago

@VainF Thanks.