ajktym94 opened this issue 2 years ago
The structured sparsity of a model learned with Lottery Ticket pruning should simplify and speed up computation at inference, since many weights are set to zero.
However, since PyTorch's standard dense kernels do not exploit this sparsity, this most likely will not improve inference speed, right? I assume the pruned channels/filters/weights are just set to 0 rather than actually removed, so they still take up the same memory as an unpruned model?
If the open_lth framework is used for Lottery Ticket Hypothesis experiments on a model, will it improve the inference speed or memory usage of the resulting models? As far as I know, even after pruning, the models occupy the same memory as before and would therefore take the same time and memory at inference.
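To illustrate the concern above, here is a minimal sketch using NumPy dense arrays as a stand-in for a framework's weight tensors (this mask-based magnitude pruning is only an illustration, not open_lth's actual code): zeroing out weights leaves the tensor's shape and byte size unchanged, so memory usage does not drop.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512)).astype(np.float32)

# "Prune" 90% of the weights by magnitude: they are set to zero,
# not removed -- the tensor keeps its dense shape and storage.
threshold = np.quantile(np.abs(weights), 0.9)
mask = np.abs(weights) >= threshold
pruned = weights * mask

print(pruned.shape)                   # still (512, 512)
print(weights.nbytes, pruned.nbytes)  # identical byte counts
print((pruned == 0).mean())           # ~0.9 of the entries are zero
```

A dense matrix multiply with `pruned` still touches every entry, zeros included, which is why masked pruning alone does not speed up inference; realizing the savings requires sparse kernels or physically removing structures (e.g. whole channels) from the layer shapes.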