facebookresearch / open_lth

A repository in preparation for open-sourcing lottery ticket hypothesis code.
MIT License

Does open_lth give inference speed/memory usage improvement? #18

Open ajktym94 opened 2 years ago

ajktym94 commented 2 years ago

If open_lth framework is used for Lottery Ticket Hypothesis experiments on a model, will it result in improvement in terms of inference speed/memory usage of the models?

As far as I know, even if the models are pruned, they would use the same memory as before and hence would take the same time and memory at inference.
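The point being raised can be illustrated with a small sketch (hypothetical code, not from open_lth, using numpy in place of torch): LTH-style magnitude pruning zeroes weights through a binary mask, and the masked tensor is still a dense array of the same shape and dtype, so its memory footprint does not change.

```python
import numpy as np

rng = np.random.default_rng(0)
weight = rng.standard_normal((256, 256)).astype(np.float32)

# Zero out the 80% of weights with the smallest magnitude, as
# iterative magnitude pruning does, via a binary mask.
threshold = np.quantile(np.abs(weight), 0.8)
mask = (np.abs(weight) > threshold).astype(np.float32)
pruned = weight * mask

sparsity = 1.0 - mask.mean()

# The pruned tensor is still dense: same shape, same dtype,
# same number of bytes, even though ~80% of its entries are zero.
assert pruned.nbytes == weight.nbytes
```

Unless the zeroed entries are actually removed from storage, the pruned model occupies exactly as much memory as the original.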

2016312357 commented 1 year ago

The structured sparsity of the model learned via the lottery ticket procedure can simplify and speed up computation at inference, since a large fraction of the weights are set to zero.

ajktym94 commented 1 year ago

Since PyTorch's standard dense kernels do not exploit sparsity, this most probably will not improve inference speed, right? I assume the pruned channels/filters/weights are just set to 0 and not literally removed? They would still take up the same memory as an unpruned model, right?
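To make the distinction concrete, here is a hedged sketch (hypothetical numpy code, not part of open_lth): real memory savings only appear once the surviving weights are re-stored in a sparse format such as COO (value plus row/column index triples), and even then the standard dense conv/linear kernels would not run any faster without dedicated sparse kernels.

```python
import numpy as np

rng = np.random.default_rng(1)
dense = rng.standard_normal((512, 512)).astype(np.float32)
dense *= rng.random((512, 512)) < 0.1   # keep ~10% of weights, zero the rest

# COO-style storage: only the nonzero values and their coordinates.
rows, cols = np.nonzero(dense)
values = dense[rows, cols]
sparse_bytes = values.nbytes + rows.nbytes + cols.nbytes

# At ~90% sparsity the sparse representation is much smaller than the
# dense array -- but this saving is about storage, not speed: dense
# matmul/conv kernels still process every entry, zeros included.
assert sparse_bytes < dense.nbytes
```

This is why mask-based pruning alone changes neither the memory footprint nor the latency of a standard dense forward pass.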