VainF / Torch-Pruning

[CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs
https://arxiv.org/abs/2301.12900
MIT License

Inference time is the same after pruning the model with PyTorch #84

Open AouatifZ opened 1 year ago

AouatifZ commented 1 year ago

Hello,

I have a problem with optimizing a YOLOX model using PyTorch pruning (global and local): I get the same inference time after the optimization, and I don't understand why.

Is it that PyTorch pruning does not actually remove the weights that are set to 0, or is something else going on?

Any help is appreciated, thanks in advance.

VainF commented 1 year ago

Hi @AouatifZ, the pruned parameters will be removed from the model. Could you provide more information, such as the output of `print(yolo_model)` before and after pruning?

AouatifZ commented 1 year ago

This is the code I am using to prune the model:

```python
import torch
import torch.nn.utils.prune as prune

# Collect the weights of all Conv2d layers for global pruning
parameters_to_prune = []
for module_name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        parameters_to_prune.append((module, 'weight'))

# Globally zero out the 50% of conv weights with the smallest L1 magnitude
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.5,
)

# Make the pruning permanent (bake the masks into the weights)
for module, _ in parameters_to_prune:
    prune.remove(module, 'weight')
```

I saved my tensors before and after pruning in text files: before_prune.txt and after_prune.txt.

Inference time before pruning: 0.3 s

Inference time after pruning: 0.3 s
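
A quick sanity check, reusing the `parameters_to_prune` list from the snippet above, shows what this pruning actually did: the pruned weights are only set to zero and every tensor keeps its original shape, so the dense convolutions still do the same amount of work (a sketch, not part of the original thread):

```python
# torch.nn.utils.prune only zeroes weights; it does not change tensor shapes
total, zeros = 0, 0
for module, name in parameters_to_prune:
    w = getattr(module, name)
    zeros += int((w == 0).sum())
    total += w.numel()
    print(type(module).__name__, tuple(w.shape))  # same shapes before and after pruning
print(f"global conv sparsity: {zeros / total:.1%}")
```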

VainF commented 1 year ago

Unstructured pruning does not remove parameters; it only masks them with binary masks, so the tensor shapes and the amount of computation stay the same. If you would like to accelerate inference, please use structural pruning as mentioned in the README.
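
For reference, a minimal structural-pruning sketch adapted from the Torch-Pruning quickstart (class and argument names such as `MagnitudeImportance`, `MagnitudePruner`, and `ch_sparsity` follow the v1.x API and may differ in other versions; the ResNet-18 is only a stand-in for the YOLOX model):

```python
import torch
import torchvision
import torch_pruning as tp

# Stand-in model; replace with your YOLOX model and a dummy input of the right shape
model = torchvision.models.resnet18()
example_inputs = torch.randn(1, 3, 224, 224)

# Rank channels by L2 magnitude
imp = tp.importance.MagnitudeImportance(p=2)

# Keep the final classifier untouched
ignored_layers = [m for m in model.modules() if isinstance(m, torch.nn.Linear)]

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    ch_sparsity=0.5,              # remove roughly 50% of the channels
    ignored_layers=ignored_layers,
)
pruner.step()  # channels are physically removed, so layer shapes actually shrink
```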

VainF commented 1 year ago

BTW, this tutorial in branch v1.0 might be helpful! https://github.com/VainF/Torch-Pruning/blob/v1.0/tutorials/0%20-%20QuickStart.ipynb
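
To verify that structural pruning really reduced the model size and latency, something along these lines can be used (a sketch: `tp.utils.count_ops_and_params` follows the quickstart, and the timing loop is only a rough CPU measurement):

```python
import time
import torch
import torch_pruning as tp

def latency(model, example_inputs, repeat=50):
    # Very rough CPU latency estimate
    model.eval()
    with torch.no_grad():
        for _ in range(10):            # warm-up
            model(example_inputs)
        start = time.perf_counter()
        for _ in range(repeat):
            model(example_inputs)
    return (time.perf_counter() - start) / repeat

# Run this before and after pruner.step() to compare
macs, nparams = tp.utils.count_ops_and_params(model, example_inputs)
print(f"MACs: {macs / 1e9:.2f} G, params: {nparams / 1e6:.2f} M, "
      f"latency: {latency(model, example_inputs) * 1e3:.1f} ms")
```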

AouatifZ commented 1 year ago

Thank you very much for your help!

I will try these techniques in my case.