-
one GTX 1070 (8 GB)
64 GB RAM
width= 608
height= 608
random=1
All settings are default; I did not change anything.
I have tried several combinations of these values, and all of them failed!
RuntimeError: CUDA error: out o…
-
Hello author,
I used your code to train VGG-16/19 and ResNet-56/164 on the CIFAR-10 and CIFAR-100 datasets. However, both the baseline and the train-with-sparsity results show a large gap from the results you report. The specific results are below; could you tell me what might cause this?
![image](https://user-images.githubusercontent.com/44216841/80172707…
-
## 🚀 Feature
Support mainstream pruning techniques.
## Motivation
Recently, many new pruning algorithms have been proposed, but the [current implementation](https://github.com/pytorch/pytorch/blob/4…
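To make the request concrete: the most common baseline such a module would need to cover is unstructured magnitude pruning, i.e. zeroing the smallest-magnitude weights under a mask kept alongside the parameter. A minimal hand-rolled sketch, assuming nothing about PyTorch's eventual API (`magnitude_prune` and its signature are illustrative, not an existing function):

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, amount: float) -> torch.Tensor:
    """Zero the `amount` fraction of smallest-|w| weights; return the mask."""
    w = layer.weight.data
    k = int(amount * w.numel())
    # Threshold at the k-th smallest |w|; keep everything strictly above it.
    threshold = w.abs().flatten().kthvalue(k).values if k > 0 else w.abs().min() - 1
    mask = (w.abs() > threshold).float()
    layer.weight.data.mul_(mask)   # apply the mask in place
    return mask                    # caller should reapply it after each optimizer step

layer = nn.Linear(8, 4)
mask = magnitude_prune(layer, amount=0.5)
sparsity = 1.0 - mask.mean().item()
print(f"sparsity: {sparsity:.2f}")  # roughly 0.5
```

A real framework-level API would also need structured (channel/filter) variants and a way to make the mask persistent across optimizer steps, which is exactly what this feature request is asking for.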
-
Mostly an ESP8266 issue, but it also occurs on ESP32 with higher numbers of devices.
During device discovery, when the Alexa device sends `GET /api/2WLEDHardQrI3WHYTHoMcXHgEspsM8ZZRpSKtBQr/lights HTTP…
-
Hi @liuzhuang13,
Thank you for the great work. I saw that you leverage the scaling factors of batch normalization to prune the incoming and outgoing weights at conv layers. However, in DenseNet, after a basic…
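For context, the pruning criterion mentioned above can be sketched as a single global threshold over the absolute BN scaling factors (gamma): channels below the threshold are removed along with their corresponding conv filters. A hedged, self-contained sketch; the toy model, the randomized gammas, and the 70% prune ratio are made-up illustrations, not values from the paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical two-stage conv+BN model, just to have gammas to threshold.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)
# BN weights initialize to 1; fake "trained" gammas so the demo has a spread.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.data.uniform_(0.0, 1.0)

# Collect |gamma| across all BN layers and set one global threshold.
gammas = torch.cat([m.weight.detach().abs()
                    for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
prune_ratio = 0.7                                  # assumed, not from the paper
threshold = torch.quantile(gammas, prune_ratio)

# Per-layer keep masks: surviving channels (and their conv filters).
masks = [m.weight.detach().abs() > threshold
         for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
kept = sum(int(mask.sum()) for mask in masks)
print(f"kept {kept}/{gammas.numel()} channels")
```

The DenseNet complication the question alludes to is that a pruned channel feeds every later layer through concatenation, so the masks must be propagated along those skip connections rather than applied layer-locally as above.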
-
An error occurred near line 174 of the Dockerfile during `docker build`.
The build ended with exit code 127, but
I don't know what to fix.
--------------------- error log ----------------
Container…
-
The training process is as follows:
```
[Epoch 0/20, Batch 13/2302] [Losses: x 0.204015, y 0.214396, w 4.140403, h 1.606703, conf 30.286903, cls 0.171045, total 36.623466, recall: 0.00000, precision: 0.00000]
[Epoch 0/20, Batc…
-
bn_module.weight.grad.data.add_(s * torch.sign(bn_module.weight.data)) # L1
I don't understand how adding the L1 term to the gradient drives the gamma values to zero for the subsequent pruning. If anyone knows, could you please explain? Thanks!
-
In my understanding, I always took "shallow" and "deep" features to mean the features of shallow and deep blocks. For example, in U-Net, the output of E1 is a shallow feature and the output of E3 is a deep feature. But in your paper, you mention: **Taking the output of E1 as an illustration, we calculate the feature …
-
It is like an unloved child receiving all the garbage gifts from people who don't know where else to put their stuff.
Yet I am forced to inherit all of those via transitive dependencies.
cumulus-pallet-xcmp-qu…