VainF / Torch-Pruning

[CVPR 2023] DepGraph: Towards Any Structural Pruning
https://arxiv.org/abs/2301.12900
MIT License

Memory of Model Increases and Inference Stays the Same After Pruning #33

Open ydneysay opened 3 years ago

ydneysay commented 3 years ago

The memory footprint of my PyTorch model increases after I save it to my directory using torch.save(). Also, inference with my model does not really speed up. Shouldn't structured pruning decrease memory and speed up inference?
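For what it's worth, pruned-vs-unpruned comparisons are easy to distort by timing without warm-up or CUDA synchronization, or by saving the whole module instead of its state_dict. A minimal measurement sketch in plain PyTorch (helper names like `checkpoint_size_mb` and `latency_ms` are mine, not from this thread):

```python
import os
import time
import torch

def checkpoint_size_mb(model, path="model.pth"):
    # Saving state_dict() keeps only the tensors; torch.save(model, ...)
    # also pickles the module structure and can produce a larger file.
    torch.save(model.state_dict(), path)
    return os.path.getsize(path) / 1e6

@torch.no_grad()
def latency_ms(model, example_inputs, warmup=10, iters=50):
    model.eval()
    for _ in range(warmup):              # exclude one-time setup costs
        model(example_inputs)
    if example_inputs.is_cuda:
        torch.cuda.synchronize()         # CUDA kernels run asynchronously
    start = time.perf_counter()
    for _ in range(iters):
        model(example_inputs)
    if example_inputs.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3
```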

VainF commented 3 years ago

Hi @ydneysay

Could you provide a minimal example to reproduce this issue?

Zhiwei-Zhai commented 1 year ago

Hi,

I have a similar issue. I used the high-level pruner "MagnitudePruner" for Mask R-CNN pruning, with iterative_steps = 1. The number of model parameters was reduced from 44M to 15.5M.

However, inference after pruning got slower.
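For anyone reproducing this, here is a minimal one-shot MagnitudePruner sketch following the Torch-Pruning README, shown on a plain ResNet-50 rather than Mask R-CNN for brevity; the 50% channel sparsity and the ignored layers are assumptions, not the commenter's actual settings:

```python
import torch
import torchvision.models as models
import torch_pruning as tp

model = models.resnet50(weights=None)
example_inputs = torch.randn(1, 3, 224, 224)

imp = tp.importance.MagnitudeImportance(p=2)  # L2 filter magnitude

# Keep the final classifier un-pruned so the output dimension is preserved.
ignored_layers = [model.fc]

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    iterative_steps=1,   # one-shot pruning, as in the comment above
    ch_sparsity=0.5,     # assumed ratio; the thread does not state one
    ignored_layers=ignored_layers,
)
pruner.step()

macs, nparams = tp.utils.count_ops_and_params(model, example_inputs)
print(f"MACs: {macs / 1e9:.2f}G, Params: {nparams / 1e6:.2f}M")
```

Note that a lower parameter count guarantees a smaller state_dict, but not lower latency: the wall-clock speedup depends on whether the pruned layers were actually the bottleneck.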

kewang-seu commented 1 year ago

> Hi,
>
> I have a similar issue. I used the high-level pruner "MagnitudePruner" for Mask R-CNN pruning, with iterative_steps = 1. The number of model parameters was reduced from 44M to 15.5M.
>
> However, inference after pruning got slower.

Hi, have you solved this problem? I now have a similar problem: the inference time has not changed after pruning.

VainF commented 1 year ago

Hello, if your model cannot fully utilize the GPU (utilization below 100%), width pruning may not lead to a significant acceleration. In this case, increasing the batch size can show some improvement in speed.
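To check whether GPU underutilization is the culprit, one can sweep the batch size and compare the throughput of the pruned and unpruned models. A rough benchmark sketch (`throughput` is a hypothetical helper; assumes a CUDA device and 224×224 inputs):

```python
import time
import torch
import torchvision.models as models

@torch.no_grad()
def throughput(model, batch_size, device="cuda", iters=30):
    """Images/second at a given batch size (warm-up + synchronized timing)."""
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    model = model.to(device).eval()
    for _ in range(5):
        model(x)                     # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    return batch_size * iters / (time.perf_counter() - start)

# At batch size 1, kernel-launch overhead can dominate, so a pruned model
# may look no faster; at larger batches the reduced FLOPs tend to show up.
model = models.resnet50(weights=None)
for bs in (1, 8, 32, 128):
    print(f"batch {bs:4d}: {throughput(model, bs):.0f} img/s")
```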

J0eky commented 11 months ago

@ydneysay @Zhiwei-Zhai @kewang-seu Hi, have you solved the problem? In my case, the inference time increased after pruning.