princeton-nlp / CoFiPruning

[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
MIT License

More numbers on other sparsities #23

Closed GeneZC closed 1 year ago

GeneZC commented 2 years ago

CoFi is great work that may benefit research in related areas.

However, I noticed that the task-performance numbers at other sparsities are not available. Could you please provide them in detail?

Metrics other than accuracy on GLUE would also be appreciated.

xiamengzhou commented 2 years ago

Hi, thanks for checking out our repo! We don't have results for models with sparsity below 60% because at those sparsities we rarely observed a performance drop compared to the full models. But you can train models at any sparsity you'd like with our code!
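
A minimal sketch of how such a sparsity sweep might look, assuming a `run.py` entry point with a `--target_sparsity` flag; both names are illustrative and may differ from the repo's actual scripts and arguments:

```python
# Hypothetical sparsity sweep; the entry point and flag names here are
# assumptions, not the repo's verified interface -- check the repo's
# scripts for the real commands.
import subprocess

for sparsity in [0.60, 0.70, 0.80, 0.90, 0.95]:
    subprocess.run(
        [
            "python", "run.py",
            "--task_name", "MNLI",
            "--target_sparsity", str(sparsity),
            "--output_dir", f"out/CoFi-MNLI-s{int(sparsity * 100)}",
        ],
        check=True,  # abort the sweep if one run fails
    )
```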

I currently do not have the bandwidth to get more results/metrics :/ But you can use the evaluation.py script to get other metrics with a slight code change.
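
As a rough illustration of the kind of change meant here (this helper is not part of the repo; `predictions` and `labels` stand in for whatever evaluation.py already computes):

```python
# Illustrative sketch: report F1 and Matthews correlation alongside accuracy.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def extended_glue_metrics(predictions: np.ndarray, labels: np.ndarray) -> dict:
    """Accuracy plus the extra metrics GLUE uses (F1 for MRPC/QQP, MCC for CoLA)."""
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="binary"),
        "matthews_correlation": matthews_corrcoef(labels, predictions),
    }
```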

Let me know if you have more questions!

GeneZC commented 2 years ago

Thanks for the reply! I was actually hoping for the numbers at sparsities like 70%, 80%, and so on.

xiamengzhou commented 2 years ago

Oh! Do you mean the numbers in Figure 2 of the paper? I compiled them in a spreadsheet. Let me know if you need more data points!

GeneZC commented 2 years ago

Many thanks! I would also appreciate the results on the other datasets in Table 2, if available. Otherwise, I will produce them by running the code.

xiamengzhou commented 2 years ago

Hi, unfortunately, we didn't run all the sparsities for all datasets. Let me know if you encounter any problems when producing the results!

xiamengzhou commented 1 year ago

Hi, I am closing this issue for now :) Feel free to reopen it if you have more questions.