zama-ai / concrete-ml

Concrete ML: Privacy Preserving ML framework using Fully Homomorphic Encryption (FHE), built on top of Concrete, with bindings to traditional ML frameworks.

How to use L1-norm unstructured pruning? #551

Closed maxwellgodv closed 7 months ago

maxwellgodv commented 7 months ago

Hello, how can I use L1-norm unstructured pruning on CIFAR-10, as described in the paper "Deep Neural Networks for Encrypted Inference with TFHE"?

andrei-stoian-zama commented 7 months ago

You can check out the step-by-step guide in the documentation!

maxwellgodv commented 7 months ago

Thanks! I'd also like to know what "Active Neurons" represents. Is it the same as the max_non_zero parameter in the prune function?

    # Requires: import torch.nn as nn; import torch.nn.utils.prune as prune
    def prune(self, max_non_zero):
        # Linear layer weight has dimensions NumOutputs x NumInputs
        for name, layer in self.named_modules():
            if isinstance(layer, nn.Linear):
                print(name, layer)
                # Number of weights to zero so that each output neuron keeps
                # at most max_non_zero non-zero input connections.
                num_zero_weights = (layer.weight.shape[1] - max_non_zero) * layer.weight.shape[0]
                if num_zero_weights <= 0:
                    continue
                print(f"Pruning layer {name}: zeroing {num_zero_weights} weights")
                prune.l1_unstructured(layer, "weight", amount=num_zero_weights)
                self.pruned_layers.add(name)
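For illustration, here is a minimal, self-contained sketch of how such a prune method could be wired into a toy model. The SimpleMLP class and its layer sizes are hypothetical, not from Concrete ML:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune


class SimpleMLP(nn.Module):
    """Hypothetical small MLP used to illustrate the prune() method."""

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(32, 16)
        self.fc2 = nn.Linear(16, 10)
        self.pruned_layers = set()

    def prune(self, max_non_zero):
        # Keep at most `max_non_zero` non-zero input connections per neuron.
        for name, layer in self.named_modules():
            if isinstance(layer, nn.Linear):
                num_zero_weights = (layer.weight.shape[1] - max_non_zero) * layer.weight.shape[0]
                if num_zero_weights <= 0:
                    continue
                prune.l1_unstructured(layer, "weight", amount=num_zero_weights)
                self.pruned_layers.add(name)


model = SimpleMLP()
model.prune(max_non_zero=8)
# fc1: (32 - 8) * 16 = 384 weights zeroed; fc2: (16 - 8) * 10 = 80 zeroed.
print(sorted(model.pruned_layers))  # ['fc1', 'fc2']
```

With max_non_zero=8, every neuron in both layers ends up with at most 8 non-zero input weights, which is the accumulator-size constraint the pruning is meant to satisfy.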
andrei-stoian-zama commented 7 months ago

The code given limits each neuron to at most max_non_zero "active" connections, where an active connection is a non-zero weight.

With unstructured pruning all neurons are still active, but some of their connections are removed.
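A small sketch, using PyTorch's torch.nn.utils.prune directly, showing that l1_unstructured zeroes individual connections while leaving the layer's neuron count untouched:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# One linear layer: 4 output neurons, each with 6 input connections (24 weights).
layer = nn.Linear(6, 4)

# Zero out the 12 smallest-magnitude weights (L1 criterion).
prune.l1_unstructured(layer, name="weight", amount=12)

# Exactly 12 individual connections are now zero...
num_zero = int((layer.weight == 0).sum())
print(num_zero)  # 12

# ...but the weight matrix keeps its full shape: all 4 neurons still exist,
# only some of their connections were removed.
print(layer.weight.shape)  # torch.Size([4, 6])
```

This is the key difference from structured pruning, which would delete entire rows (whole neurons) instead of individual weights.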