princeton-nlp / CoFiPruning

[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
MIT License
191 stars 31 forks

Obtaining models with different target sparsities from a single trained model #2

Closed chamecall closed 2 years ago

chamecall commented 2 years ago

Hello. First of all, thanks for your great work. I have the following question: if we train a model with, for example, ~90% target sparsity, is it possible to obtain variations of the already trained model at lower sparsities like 75%, 50%, etc., or is the only way to get a different target sparsity to retrain the model with the needed sparsity? Thanks in advance.

xiamengzhou commented 2 years ago

Hi,

CoFi requires training a separate model for each specific sparsity. However, given a lower-sparsity model, e.g., 75%, you can load the model and the l0_module back, reset the target sparsity to a higher number, e.g., 90%, and keep training with the pruning objective to get a 90%-sparsity model. I have never tried this, but it should require only a minimal change to the codebase to make it work.
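
For illustration, a rough, untested sketch of what that could look like is below. The checkpoint directory layout (an HF-format model saved alongside an `l0_module.pt` object) and the `target_sparsity` attribute name are assumptions here, not the exact CoFi API, so adapt them to the actual code:

```python
# Minimal, untested sketch: resume pruning from a 75%-sparsity checkpoint
# and raise the target to 90%. Paths, file names, and the `target_sparsity`
# attribute are assumptions; adjust them to match the real checkpoints.
import torch
from transformers import AutoModelForSequenceClassification

ckpt_dir = "out/bert-base-75sparsity"   # hypothetical checkpoint directory

# Load the previously pruned backbone, assuming it was saved in HF format.
model = AutoModelForSequenceClassification.from_pretrained(ckpt_dir)

# Load the L0 module, assuming it was saved as a whole object with torch.save.
l0_module = torch.load(f"{ckpt_dir}/l0_module.pt", map_location="cpu")

# Reset the sparsity target before resuming training with the pruning objective.
l0_module.target_sparsity = 0.90        # assumed attribute name

# From here, rerun the usual CoFi training loop (Lagrangian sparsity
# constraint + distillation) so the learned masks adapt to the new target.
```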