
PRUNING NEURAL NETWORKS AT INITIALIZATION: WHY ARE WE MISSING THE MARK? #132


AkiraTOSEI commented 3 years ago

TL;DR

A study that evaluates methods that prune neural networks at initialization. These methods prune better than random, but consistently perform worse than methods that prune after training. In addition, randomly shuffling the pruning mask within each layer, or reinitializing the weights, yields equal or better accuracy rather than degrading it. This suggests that such methods do not identify which individual weights to prune; they effectively only determine what fraction of weights to prune in each layer, and can be replaced by a scheme that just sets a per-layer pruning rate.
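As a rough illustration (not the authors' code), the sketch below shows the layer-wise shuffling ablation described above: given per-layer binary masks produced by some at-initialization pruning method, permute each layer's mask at random so that only the per-layer fraction of kept weights is preserved. The function names and the `masks` dictionary (parameter name → binary mask) are hypothetical.

```python
import torch
from typing import Dict


def shuffle_masks_within_layers(masks: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Return new masks with the same per-layer sparsity but randomized positions."""
    shuffled = {}
    for name, mask in masks.items():
        flat = mask.flatten()
        perm = torch.randperm(flat.numel())  # random permutation of weight positions
        shuffled[name] = flat[perm].reshape(mask.shape)
    return shuffled


def apply_masks(model: torch.nn.Module, masks: Dict[str, torch.Tensor]) -> None:
    """Zero out pruned weights in place before (re)training."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name].to(param.dtype))
```

Under this ablation the per-layer pruning rates are unchanged while the specific pruned positions are scrambled; the paper reports that accuracy stays the same or improves, which is what motivates the per-layer-rate interpretation above.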

Why it matters:

Paper URL

https://arxiv.org/abs/2009.08576

Submission Date (yyyy/mm/dd)

2020/09/18

Authors and institutions

Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin


Methods

Results

Comments