horseee / LLM-Pruner (Apache License 2.0 · 880 stars · 106 forks)
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
https://arxiv.org/abs/2305.11627
Issue #54 (Open): Is this method implementable on multi-GPUs?
LeonCheng0129 opened 8 months ago
BrownTan commented 1 week ago:
I also want to know.