# Efficient Computing

This repo is a collection of efficient-computing methods developed by Huawei Noah's Ark Lab.
- **Data-Efficient-Model-Compression**: a series of compression methods that require little or no training data.
- **BinaryNetworks**: binary neural networks, including AdaBin (ECCV22).
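To illustrate the idea behind binary networks (this is a generic XNOR-Net-style sketch, not AdaBin's adaptive binarization — AdaBin learns its own binary value sets), weights are reduced to two values, here the sign scaled by the mean absolute magnitude:

```python
import numpy as np

def binarize(weights):
    """Binarize weights to {-alpha, +alpha}, where alpha is the mean
    absolute value of the weights (a common scaling choice).
    Generic illustration only; AdaBin itself learns adaptive binary sets."""
    w = np.asarray(weights, dtype=float)
    alpha = np.abs(w).mean()
    return alpha * np.sign(w), alpha

w = np.array([0.3, -0.7, 0.1, -0.5])
wb, alpha = binarize(w)
# wb now holds only two distinct values, -alpha and +alpha,
# so each weight can be stored in a single bit plus one shared scale.
```

The payoff is that 32-bit multiplications can be replaced by bit operations plus one multiply by the shared scale.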
- **Distillation**: knowledge distillation methods, including ManifoldKD (NeurIPS22) and VanillaKD (NeurIPS23).
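The common core of these methods is training a small student to match a large teacher. A minimal sketch of the classic soft-target loss (Hinton-style KL divergence with temperature; the repo's methods build on and refine this, so treat it as background, not their implementation):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)       # teacher's soft targets
    q = softmax(student_logits, T)       # student's prediction
    return T * T * np.sum(p * (np.log(p) - np.log(q)))
```

In practice this term is mixed with the ordinary cross-entropy on ground-truth labels.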
- **Pruning**: network pruning methods, including GAN-pruning (ICCV19), SCOP (NeurIPS20), ManiDP (CVPR21), and RPG (NeurIPS23).
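As background for the pruning entries, here is the simplest unstructured baseline — magnitude pruning, which zeroes the smallest weights. The listed methods use more sophisticated criteria (e.g. SCOP's scientific-control setup), so this is only a generic sketch:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.
    Generic unstructured-pruning baseline, not any specific paper's method.
    Ties at the threshold may prune slightly more than requested."""
    w = np.asarray(weights, dtype=float)
    k = int(round(sparsity * w.size))    # number of weights to remove
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w).ravel())[k - 1]
    mask = np.abs(w) > threshold         # keep only larger weights
    return w * mask

pruned = magnitude_prune([0.1, -0.5, 0.05, 0.9], sparsity=0.5)
# half the weights (the two smallest in magnitude) are now exactly zero
```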
- **Quantization**: model quantization methods, including DynamicQuant (CVPR22).
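For context, quantization maps float tensors onto a small integer grid. A minimal uniform symmetric fake-quantization sketch (DynamicQuant itself selects bit-widths per input dynamically; this only shows the basic round-to-grid step):

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Uniform symmetric quantize-dequantize: round values onto an
    integer grid of 2^(num_bits-1)-1 levels per sign, then map back.
    Generic illustration of post-training quantization, not DynamicQuant."""
    x = np.asarray(x, dtype=float)
    qmax = 2 ** (num_bits - 1) - 1       # e.g. 127 for 8 bits
    scale = np.abs(x).max() / qmax
    if scale == 0.0:
        scale = 1.0                      # all-zero input: nothing to scale
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                     # dequantized approximation
```

The rounding error per element is bounded by half the grid step, `scale / 2`.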
- **Self-supervised**: self-supervised learning methods, including FastMIM and LocalMIM (CVPR23).
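Both FastMIM and LocalMIM belong to the masked-image-modeling (MIM) family: hide a random subset of image patches and train the model to reconstruct them. A sketch of the masking step only (the interesting parts of these papers — target signals and multi-scale losses — are not shown):

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio, rng):
    """Return a boolean mask marking which image patches to hide,
    as in MIM pretraining. Generic step, not either paper's pipeline."""
    num_masked = int(num_patches * mask_ratio)
    perm = rng.permutation(num_patches)  # random order of patch indices
    mask = np.zeros(num_patches, dtype=bool)
    mask[perm[:num_masked]] = True       # True = patch is masked out
    return mask

rng = np.random.default_rng(0)
mask = random_patch_mask(196, 0.75, rng)  # e.g. a 14x14 ViT patch grid
```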
- **TrainingAcceleration**: accelerating neural network training via NetworkExpansion (CVPR23).
- **Detection**: efficient object detectors, including Gold-YOLO (NeurIPS23).
- **LowLevel**: efficient low-level vision models, including IPG (CVPR24).