This repository contains a PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin that can easily be adapted to any model/dataset.
The implementation uses layer-wise pruning, while global pruning is usually used for large conv models #8
The prune_by_percentile function defined in main.py uses layer-wise pruning for all models, whereas the original LTH paper finds that global pruning works better for larger convolutional models such as ResNet and VGG.
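
For reference, the difference can be sketched as follows. This is a minimal illustration, not the repository's actual prune_by_percentile implementation: layer-wise pruning thresholds each layer's weights independently, so every layer loses the same fraction, while global pruning pools all weights into a single distribution and applies one threshold, letting sparsity vary across layers.

```python
import numpy as np
import torch
import torch.nn as nn

def layerwise_prune_masks(model, percent):
    """Binary mask per weight tensor, zeroing the smallest `percent` %
    of weights *within each layer* (layer-wise pruning)."""
    masks = {}
    for name, param in model.named_parameters():
        if "weight" not in name:
            continue  # only weights are pruned, not biases
        w = param.detach().cpu().numpy()
        # threshold computed from this layer's nonzero weights only
        threshold = np.percentile(np.abs(w[w != 0]), percent)
        masks[name] = torch.from_numpy((np.abs(w) >= threshold).astype(np.float32))
    return masks

def global_prune_masks(model, percent):
    """Masks from a single magnitude threshold taken over the weights
    of *all* layers pooled together (global pruning)."""
    all_weights = np.concatenate([
        p.detach().cpu().numpy().ravel()
        for n, p in model.named_parameters() if "weight" in n
    ])
    threshold = np.percentile(np.abs(all_weights[all_weights != 0]), percent)
    masks = {}
    for name, param in model.named_parameters():
        if "weight" not in name:
            continue
        w = param.detach().cpu().numpy()
        masks[name] = torch.from_numpy((np.abs(w) >= threshold).astype(np.float32))
    return masks

# Usage sketch: with global pruning, layers with larger-magnitude weights
# keep more of them instead of every layer losing exactly `percent` %.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
masks = global_prune_masks(model, percent=20.0)
with torch.no_grad():
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name].to(param.device))
```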