zssloth / Embedded-Neural-Network

A collection of works on reducing model size and on ASIC/FPGA accelerators for machine learning

ICLR 2019 papers can be added #5

Closed: ydc123 closed this issue 5 years ago

ydc123 commented 5 years ago

Thanks for your work. I manually collected some papers from ICLR 2019. Can I help you complete this repository?

Poster Presentations:

- SNIP: Single-Shot Network Pruning Based on Connection Sensitivity
- Rethinking the Value of Network Pruning
- Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach
- Dynamic Channel Pruning: Feature Boosting and Suppression
- Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
- Slimmable Neural Networks
- RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks
- Dynamic Sparse Graph for Efficient Deep Learning
- Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition
- Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
- Learning Recurrent Binary/Ternary Weights
- Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network
- Relaxed Quantization for Discretized Neural Networks
- Integer Networks for Data Compression with Latent-Variable Models
- Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters
- A Systematic Study of Binary Neural Networks' Optimisation
- Analysis of Quantized Models

Oral Presentations:

- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
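Since many of the listed papers revolve around weight pruning, here is a minimal, hypothetical sketch of global magnitude pruning in PyTorch, the kind of baseline used in lottery-ticket-style experiments. The function name, the sparsity value, and the masking strategy are illustrative choices and are not taken from any of the papers above.

```python
# Hypothetical sketch: global magnitude pruning in PyTorch.
# Illustrates the general idea behind the pruning papers listed above;
# it is not code from any of those papers.
import torch

def magnitude_prune(model, sparsity=0.8):
    """Zero out the globally smallest-magnitude weights in all 2D+ parameters."""
    weights = [p for p in model.parameters() if p.dim() > 1]
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    # Threshold chosen so that roughly `sparsity` of the weights fall below it.
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.kthvalue(k).values
    masks = []
    with torch.no_grad():
        for w in weights:
            mask = (w.abs() > threshold).float()
            w.mul_(mask)        # apply the mask in place
            masks.append(mask)  # keep masks to re-apply after later updates
    return masks
```

In an iterative setup one would prune, retrain while re-applying the masks after each optimizer step, and repeat until the target sparsity is reached.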

zssloth commented 5 years ago

Thanks, contributions are welcome!