Closed maxwellgodv closed 7 months ago
You can check out the step-by-step guide in the documentation!
Thanks! I would like to know what "Active Neurons" represents. Is it the same as the `max_non_zero` parameter in the prune function?
```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune(self, max_non_zero):
    # Linear layer weight has dimensions NumOutputs x NumInputs
    for name, layer in self.named_modules():
        if isinstance(layer, nn.Linear):
            print(name, layer)
            # Number of weights to zero so that, in total, at most
            # max_non_zero weights per output neuron remain.
            num_zero_weights = (layer.weight.shape[1] - max_non_zero) * layer.weight.shape[0]
            if num_zero_weights <= 0:
                continue
            print(f"Pruning layer {name} factor {num_zero_weights}")
            prune.l1_unstructured(layer, "weight", amount=num_zero_weights)
            self.pruned_layers.add(name)
```
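To see what this does in practice, here is a minimal, self-contained sketch (the `TinyNet` model and the `prune_weights` name are illustrative, not from the repo). After pruning, each `Linear` layer keeps exactly `max_non_zero * NumOutputs` non-zero weights in total:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class TinyNet(nn.Module):
    """Illustrative two-layer model (not from the repo)."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 8)
        self.fc2 = nn.Linear(8, 4)
        self.pruned_layers = set()

    def prune_weights(self, max_non_zero):
        # Same logic as the snippet above.
        for name, layer in self.named_modules():
            if isinstance(layer, nn.Linear):
                num_zero_weights = (layer.weight.shape[1] - max_non_zero) * layer.weight.shape[0]
                if num_zero_weights <= 0:
                    continue
                prune.l1_unstructured(layer, "weight", amount=num_zero_weights)
                self.pruned_layers.add(name)

net = TinyNet()
net.prune_weights(max_non_zero=4)
for name, layer in net.named_modules():
    if isinstance(layer, nn.Linear):
        # Total non-zero count is max_non_zero * NumOutputs; note that
        # l1_unstructured ranks weights over the whole tensor, so an
        # individual row may keep more or fewer than max_non_zero.
        print(name, (layer.weight != 0).sum().item(), "non-zero of", layer.weight.numel())
```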
The code given restricts the maximum number of "active" neuron connections to `max_non_zero`. An active neuron connection is a non-zero weight.
With unstructured pruning all neurons are still active, but some of their connections are removed.
Hello, how do I use L1-norm unstructured pruning on CIFAR-10, as mentioned in the paper "Deep Neural Networks for Encrypted Inference with TFHE"?