BNAadministrator3 opened 2 years ago
After a thorough review of the code, I found that the paper "Channel Permutations for N:M Sparsity" did not actually publish its code. The URL given in the paper only points to the code of plain ASP, which lacks the claimed channel permutation technique.
@BNAadministrator3
Hi, I am the first author of the paper "Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch". We have released source code for pruning models with arbitrary N:M sparsity; you can refer to https://github.com/NM-sparsity/NM-sparsity.
Thank you very much. I have noticed your great work! Moreover, the code at https://github.com/NM-sparsity/NM-sparsity is very clear and inspiring. Keep in touch @NM-sparsity PS: Shortly after this issue was posted, the code described in the paper "Channel Permutations for N:M Sparsity" was uploaded to the repository.
I have read the paper "Channel Permutations for N:M Sparsity". I think it is great work. However, I am unsure whether the publicly available code can be used to prune a network to N:M sparsity patterns other than 2:4, e.g., 1:4 or 1:8. If the code can do this, either directly or with minor modification, please let me know. Thank you.
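For context on what an arbitrary N:M pattern means: independent of any particular repository, N:M magnitude pruning just keeps the N largest-magnitude weights in every contiguous group of M along a given axis, so 1:4 or 1:8 are handled by the same logic as 2:4. The sketch below is a minimal, hypothetical NumPy illustration of that mask construction (the function name `nm_prune_mask` is my own, not from either codebase):

```python
import numpy as np

def nm_prune_mask(weight, n, m):
    """Binary mask keeping the n largest-magnitude weights in every
    contiguous group of m along the last axis (N:M sparsity)."""
    assert weight.shape[-1] % m == 0, "last dim must be divisible by m"
    groups = weight.reshape(-1, m)                      # one row per group
    # indices of the (m - n) smallest magnitudes in each group
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop, 0.0, axis=1)          # zero out the dropped slots
    return mask.reshape(weight.shape)

w = np.array([[0.9, -0.1, 0.3, 0.05],
              [0.2, -0.8, 0.01, 0.6]])
print(nm_prune_mask(w, 1, 4))  # 1:4 sparsity: one survivor per group of 4
```

Each row of the example output has exactly one nonzero entry (the largest-magnitude weight in its group of four); changing `(n, m)` to `(2, 4)` or `(1, 8)` gives the other patterns mentioned above.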