-
hi @wenwei202, thank you for sharing your work. Based on your paper, I implemented [ZJCV/SSL](https://github.com/ZJCV/SSL) in PyTorch, including a `train-prune-finetuning` pipeline for VGGNet and ResNet
* For V…
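For readers unfamiliar with SSL, the core idea is a group-lasso penalty that drives whole filters (or channels) to zero so they can be pruned structurally. A minimal NumPy sketch of a per-filter group-lasso term (hypothetical shapes; the actual ZJCV/SSL code may group weights differently):

```python
import numpy as np

def group_lasso_penalty(weight):
    """Sum of L2 norms over output-filter groups of a conv weight.

    weight: array of shape (out_channels, in_channels, kH, kW).
    SSL adds lambda * penalty to the training loss, pushing entire
    filters toward zero so they can be removed after training.
    """
    flat = weight.reshape(weight.shape[0], -1)     # one group per filter
    return np.sqrt((flat ** 2).sum(axis=1)).sum()  # sum of group L2 norms

w = np.zeros((4, 3, 3, 3))
w[0] = 1.0  # only filter 0 is nonzero
print(group_lasso_penalty(w))  # L2 norm of the single all-ones filter, sqrt(27)
```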
-
Is anyone interested in utilizing the sparsity to accelerate DNNs?
I am working on the fork https://github.com/wenwei202/caffe/tree/scnn and currently achieve, on average, ~5x CPU and ~3x GPU layer-wi…
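A back-of-the-envelope way to see where layer-wise speedups like these come from: structured sparsity removes whole rows/columns of the lowered weight matrix, shrinking the GEMM. A small FLOP-count sketch (the layer dimensions and sparsity ratios below are made up; real measured speedups also depend on memory layout and kernel efficiency):

```python
# Rough FLOP-based estimate of the layer-wise speedup available from
# structured (row/column) sparsity in a lowered convolution GEMM.
def gemm_flops(m, n, k):
    return 2 * m * n * k  # multiply-accumulate count for (m,k) @ (k,n)

m, n, k = 256, 1024, 2304         # hypothetical lowered conv layer
row_sparsity, col_sparsity = 0.6, 0.5
kept_m = int(m * (1 - row_sparsity))
kept_k = int(k * (1 - col_sparsity))

speedup = gemm_flops(m, n, k) / gemm_flops(kept_m, n, kept_k)
print(f"ideal speedup: {speedup:.1f}x")  # ideal speedup: 5.0x
```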
-
-
In SSL-master/basicsr/pruner/SSL_pruner.py, `def optimize_parameters(self, current_iter)` sets the training termination condition:
if self.prune_state in ["stabilize_reg"] and self.total_iter - self.iter_stabilize_reg == self.args.stabili…
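As a standalone sketch of the quoted check: training stops once the run has spent a fixed iteration budget in the final `stabilize_reg` phase. The truncated argument name above is assumed to be that budget (called `stabilize_iters` here, which may not match the real code):

```python
# Hypothetical standalone version of the termination check quoted above.
def should_stop(prune_state, total_iter, iter_stabilize_reg, stabilize_iters):
    """Stop when exactly `stabilize_iters` iterations have elapsed in
    the 'stabilize_reg' phase (entered at iter_stabilize_reg)."""
    return (prune_state == "stabilize_reg"
            and total_iter - iter_stabilize_reg == stabilize_iters)

assert should_stop("stabilize_reg", 1000, 600, 400)      # budget reached
assert not should_stop("update_reg", 1000, 600, 400)     # wrong phase
assert not should_stop("stabilize_reg", 900, 600, 400)   # budget not reached
```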
-
I have read the paper "Channel permutation for N:M sparsity" and think it is great work.
However, I am unsure whether the publicly available code can be used to prune a network i…
-
I have an AlexNet caffemodel with zero-column and zero-row weights. Using `conv_mode: LOWERED_CCNMM`, I get a speedup on the CPU (e.g., structured sparsity = 75%, speedup = 3.1x), but on the GPU there is no speedup…
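For context, a small NumPy sketch (not the wenwei202/caffe code) of why zero rows and columns enable a `LOWERED_CCNMM`-style speedup: the nonzero rows/columns of the lowered weight matrix are compacted into a smaller dense block before the GEMM:

```python
import numpy as np

# Toy lowered weight matrix with structurally zero rows and columns.
W = np.zeros((4, 6))
W[0, [0, 3]] = 1.0
W[2, [0, 3]] = 2.0        # rows 1,3 and columns 1,2,4,5 are all zero

rows = np.abs(W).sum(axis=1) != 0
cols = np.abs(W).sum(axis=0) != 0
W_small = W[np.ix_(rows, cols)]   # compacted 2x2 dense block

sparsity = 1 - W_small.size / W.size
print(W_small.shape, f"structured sparsity = {sparsity:.0%}")
```

Whether the smaller GEMM actually runs faster then depends on the backend; a GPU GEMM on a small compacted matrix can be launch/bandwidth bound, which may be one reason the GPU shows no speedup.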
-
In the code, what do `LOWERED_CSRMM`, `LOWERED_CCNMM`, and `DIRECT_SCONV` mean?
When exploiting the sparsity, do you use existing libraries, e.g., MKL on the CPU and CUDA on the GPU?
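To make the question concrete, here is a toy pure-Python CSR matrix-vector product, the kernel shape that a `LOWERED_CSRMM`-style path (lowered convolution as a sparse-times-dense multiply) would hand off to a library kernel. This is an illustration only, not the Caffe implementation:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Toy CSR sparse matrix-vector product; production code would
    call an optimized kernel (e.g., MKL on CPU) instead of this loop."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# The 2x3 matrix [[1, 0, 2], [0, 3, 0]] stored in CSR form.
data, indices, indptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # [3. 3.]
```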
-
[Unified robust training for graph neural networks against label noise](https://link.springer.com/chapter/10.1007/978-3-030-75762-5_42)
```bib
@inproceedings{li2021unified,
title={Unified robust …
-
# PaddleSlim Quantization
![image](https://user-images.githubusercontent.com/1312389/170643197-8a42af2b-b696-4363-ac3a-29a582642162.png)
PaddleSlim mainly includes three quantization methods: quantization-aware training (Quant Aware Training, QAT), dynamic post-training quantization (Post Train…
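The core arithmetic shared by these quantization methods is mapping float tensors to low-bit integers with a scale. A generic symmetric per-tensor int8 sketch (illustration only, not the PaddleSlim API):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: pick a scale from the
    max absolute value, then round-and-clip to [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([-1.0, 0.0, 0.5, 1.27], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
print(np.abs(x - x_hat).max())  # round-trip error is at most scale/2
```

QAT simulates this quantize/dequantize round trip during training so the network adapts to it, while post-training methods apply it to an already-trained model.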
-
https://arxiv.org/abs/1805.07836
https://nips.cc/media/nips-2018/Slides/12761.pdf
I think we can improve on this with our generalized NLL loss function:
* Adding new output base
* Using the bas…
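As a baseline for comparison, a minimal NumPy sketch of the standard negative log-likelihood loss that the proposed generalized variant would extend (the generalized form from the linked paper and slides is not reproduced here):

```python
import numpy as np

def nll_loss(log_probs, targets):
    """Mean negative log-likelihood of the target classes.

    log_probs: (n, num_classes) array of log-softmax outputs.
    targets:   (n,) array of integer class indices.
    """
    n = len(targets)
    return -log_probs[np.arange(n), targets].mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = nll_loss(log_probs, np.array([0, 1]))
print(loss)
```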