Thanks for your amazing work!
However, when I was running the command for ResNets on ImageNet (3.06x) on an A100 (80 GB) GPU, the memory usage kept increasing from 27 GB to 79 GB and eventually triggered an OOM error.
I am wondering what the requirements are to run the code. How can I fix this OOM problem?
I didn't get this error before. Could you post your script here so I can reproduce the problem?
Here's the script:
CUDA_VISIBLE_DEVICES=0,1 python main.py --prune_method opp --opp_scheme v5 --lw_opp 1000 --update_reg_interval 5 --stabilize_reg_interval 40000 --dataset imagenet -a resnet50 --pretrained --lr_ft 0:0.01,30:0.001,60:0.0001,75:0.00001 --epochs 90 --batch_size_prune 256 --batch_size 256 --index_layer name_matching --stage_pr *layer[1-3]*conv[1-2]:0.68,*layer4*conv[1-2]:0.5 --experiment_name TPP__resnet50__imagenet__3.06x_PR0.680.5 -j 32
CUDA_VISIBLE_DEVICES=xxx is also used to specify the GPU devices.
I have also tried using more GPUs (2 A100s), but the same OOM error occurs. It seems that only the memory usage on the first GPU increases; the memory usage on the second GPU remains unchanged.
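For reference, this is how the memory growth can be tracked per iteration (a minimal sketch using standard PyTorch APIs; the loop shown in the comments is a placeholder, not the repo's actual training loop):

import torch

def log_gpu_memory(step, device=0):
    # Currently allocated and peak allocated memory on the given GPU, in GB.
    alloc = torch.cuda.memory_allocated(device) / 1024**3
    peak = torch.cuda.max_memory_allocated(device) / 1024**3
    print(f"step {step}: allocated {alloc:.2f} GB, peak {peak:.2f} GB")

# Hypothetical usage inside the pruning/fine-tuning loop:
# for step, (images, targets) in enumerate(train_loader):
#     ...forward/backward and regularization update...
#     if step % 100 == 0:
#         log_gpu_memory(step)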