Open naranjuelo opened 7 years ago
Hi! I was wondering if there is any possibility in DIGITS to compress a neural network.

In the paper "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding" (https://arxiv.org/abs/1510.00149), the authors reduce the size of VGG-16 by 49x without loss of accuracy, which is very interesting for networks that require so much memory.

Is there any way to remove the connections whose weights fall below a threshold? Thank you very much!

@lukeyeager Hi, any progress in this direction yet?

I would also be interested in a pruning feature being added. Here is an additional source explaining how pruning can yield smaller and faster networks without substantially decreasing accuracy: https://jacobgil.github.io/deeplearning/pruning-deep-learning
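Since the request boils down to zeroing out connections whose magnitude falls below a threshold, here is a minimal framework-agnostic sketch with NumPy. The threshold value is illustrative only; in practice it would be chosen per layer (e.g. from a target sparsity or a quantile of the weight distribution), and the network would then be fine-tuned, as Deep Compression does:

```python
import numpy as np

def prune_weights(weights, threshold):
    """Magnitude pruning: zero connections with |w| < threshold.

    Returns the pruned weight array and the boolean mask of kept
    connections (the mask would be reapplied after each fine-tuning
    step so pruned weights stay at zero).
    """
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Toy example with a random 4x4 weight matrix; threshold=0.5 is
# an arbitrary illustrative value, not a recommendation.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = prune_weights(w, threshold=0.5)
sparsity = 1.0 - mask.mean()
```

Note that simply zeroing weights in a dense layer does not by itself shrink memory or speed up inference; the gains come from storing the surviving weights in a sparse format (and, in Deep Compression, from the subsequent quantization and Huffman coding stages).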