
Learning Best Combination for Efficient N:M Sparsity

PyTorch implementation of our paper accepted by NeurIPS 2022 -- "Learning Best Combination for Efficient N:M Sparsity" (Link)
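For context, N:M sparsity keeps at most N nonzero weights in every group of M consecutive weights (e.g., 2:4 keeps 2 of every 4, i.e., 50% sparsity). The sketch below builds such a mask by plain magnitude pruning; it only illustrates the pattern itself, not the combination-learning scheme proposed in the paper, and nm_magnitude_mask is a hypothetical helper:

import torch

def nm_magnitude_mask(weight: torch.Tensor, N: int = 2, M: int = 4) -> torch.Tensor:
    """Keep the N largest-magnitude weights in every group of M consecutive weights."""
    groups = weight.flatten().reshape(-1, M)      # assumes weight.numel() is divisible by M
    idx = groups.abs().topk(N, dim=1).indices     # positions of the N largest magnitudes per group
    mask = torch.zeros_like(groups)
    mask.scatter_(1, idx, 1.0)                    # set the kept positions to 1
    return mask.reshape(weight.shape)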

Data Preparation

Organize the ImageNet dataset in the standard folder-per-class layout:

ImageNet
├── train
│   ├── folder 1 (class 1)
│   ├── folder 2 (class 2)
│   ├── ...
├── val
│   ├── folder 1 (class 1)
│   ├── folder 2 (class 2)
│   ├── ...
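With this layout, the standard torchvision loader can consume both splits directly; a minimal sketch (transforms and paths are placeholders, the repo's own scripts handle data loading internally):

import torchvision.datasets as datasets
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("ImageNet/train", transform=transform)
val_set = datasets.ImageFolder("ImageNet/val", transform=transform)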

Requirements

PyTorch and torchvision are required; the DeiT-small experiments additionally build on the DeiT codebase and therefore require timm.

Reproduce our results

For ResNet on ImageNet:

cd ResNet
python imagenet.py --job_dir PATH_TO_JOB_DIR --t_i 0 --t_f 60 --gpus 0 1 2 3 --train_batch_size 256 --eval_batch_size 256 --lr 0.1 --label_smoothing 0.1 --N 2 --M 4 --data_path PATH_TO_DATASETS

For DeiT-small on ImageNet:

cd DeiT-small
python3 -m torch.distributed.launch --nproc_per_node=4 --use_env main.py --model vit_deit_small_patch16_224 --batch-size 256 --data-path PATH_TO_DATASETS --output_dir PATH_TO_JOB_DIR
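Here --N and --M select the sparsity pattern: at most N nonzero weights in every group of M consecutive weights, so 2:4 keeps half of the weights. A minimal sketch for checking that a trained layer satisfies the constraint (satisfies_nm is a hypothetical helper, assuming the tensor length is divisible by M):

import torch

def satisfies_nm(weight: torch.Tensor, N: int = 2, M: int = 4) -> bool:
    # Split the flattened weights into groups of M and count nonzeros per group.
    groups = weight.flatten().reshape(-1, M)
    return bool(((groups != 0).sum(dim=1) <= N).all())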

In addition, we provide our trained models and experiment logs on Google Drive. To evaluate a trained model, run:

For ResNet:

cd ResNet
python eval.py --pretrain_dir PATH_TO_CHECKPOINTS --gpus 0 --train_batch_size 256 --eval_batch_size 256 --label_smoothing 0.1 --N 2 --M 4 --data_path PATH_TO_DATASETS

For DeiT-small:

cd DeiT-small
python3 -m torch.distributed.launch --nproc_per_node=4 --use_env main.py --model vit_deit_small_patch16_224 --batch-size 256 --data-path PATH_TO_DATASETS --output_dir PATH_TO_JOB_DIR --resume PATH_TO_CHECKPOINTS --eval
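Before running the full evaluation, you can also inspect the per-layer sparsity of a downloaded checkpoint. A minimal sketch (the checkpoint layout, including the "state_dict" key, is an assumption about the saved files):

import torch

ckpt = torch.load("PATH_TO_CHECKPOINTS", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # fall back to the top level if there is no wrapper dict (assumption)
for name, w in state.items():
    if torch.is_tensor(w) and w.dim() == 4:  # convolution weights
        print(f"{name}: {(w == 0).float().mean().item():.2%} zeros")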