mit-han-lab / gan-compression

[CVPR 2020] GAN Compression: Efficient Architectures for Interactive Conditional GANs

Question about testing the compressed model #96

Closed joe8767 closed 2 years ago

joe8767 commented 2 years ago

Thank you for sharing your excellent research.

Following the steps described in “docs/tutorials/fast_gan_compression.md”, I downloaded the pretrained once-for-all supernet model using the provided command as follows:

python scripts/download_model.py --model cycle_gan --task horse2zebra_fast --stage supernet

After that, I directly used the provided command to export a compressed model.

bash scripts/cycle_gan/horse2zebra_fast/export.sh 16_16_24_16_32_64_16_24

export.sh:

#!/usr/bin/env bash
python export.py \
  --input_path pretrained/cycle_gan/horse2zebra_fast/supernet/latest_net_G.pth \
  --output_path logs/cycle_gan/horse2zebra_fast/compressed/latest_net_G_released.pth \
  --ngf 64 --config_str $1
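The `--config_str` argument (e.g. `16_16_24_16_32_64_16_24`) encodes the per-layer channel widths of the subnet to extract from the supernet. A minimal sketch of how such a string could be parsed — `parse_config_str` is a hypothetical helper for illustration, not the repo's actual code:

```python
def parse_config_str(config_str: str) -> dict:
    # Split the underscore-separated string into per-layer channel widths.
    channels = [int(c) for c in config_str.split("_")]
    return {"channels": channels}

print(parse_config_str("16_16_24_16_32_64_16_24"))
# {'channels': [16, 16, 24, 16, 32, 64, 16, 24]}
```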

Command and script used for testing:

bash scripts/cycle_gan/horse2zebra_fast/test_compressed.sh

test_compressed.sh:

#!/usr/bin/env bash
python test.py --dataroot database/horse2zebra/valA \
  --dataset_mode single \
  --results_dir results-pretrained/cycle_gan/horse2zebra_fast/compressed \
  --config_str 16_16_24_16_32_64_16_24 \
  --restore_G_path logs/cycle_gan/horse2zebra_fast/compressed/latest_net_G_released.pth \
  --need_profile \
  --real_stat_path real_stat/horse2zebra_B.npz \
  --gpu_ids 0

Testing the exported compressed model yields MACs: 2.641G, Params: 0.355M, and an FID score of 96.50.
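For intuition about the MACs figure: the profiled total is roughly the sum of per-layer multiply-accumulates. A minimal sketch of the standard MAC count for one 2D convolution layer (the textbook formula, not the repo's profiler):

```python
def conv_macs(c_in: int, c_out: int, k: int, h_out: int, w_out: int) -> int:
    # One multiply-accumulate per output element per input-channel tap:
    # every output pixel in each of c_out maps reads a k x k patch from
    # each of c_in input channels.
    return c_in * c_out * k * k * h_out * w_out

# e.g. a 3x3 conv going from 16 to 32 channels on a 256x256 feature map:
print(conv_macs(16, 32, 3, 256, 256))  # 301989888, i.e. about 0.302G
```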

So my concern is: am I supposed to get a better FID at this step without any fine-tuning (for example, the FID reported in the table in README.md is 65.19)? Is there anything I missed?

lmxyy commented 2 years ago

Sorry for the confusion! This is because our pre-trained final compressed model was not exported from this pre-trained supernet; the previous supernet was deprecated due to outdated code. You could search for the subnet yourself with:

python evolution_search.py --dataroot database/horse2zebra/trainA \
  --dataset_mode single --phase train \
  --restore_G_path pretrained/cycle_gan/horse2zebra_fast/supernet/latest_net_G.pth \
  --output_dir logs/cycle_gan/horse2zebra_fast/supernet/evolution \
  --ngf 64 --batch_size 32 \
  --config_set channels-64-cycleGAN --mutate_prob 0.4 \
  --real_stat_path real_stat/horse2zebra_B.npz --budget 3e9 \
  --weighted_sample 2 --meta_path datasets/metas/horse2zebra/train2A.meta
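The `--mutate_prob 0.4` flag controls how aggressively the evolutionary search perturbs a parent configuration. A minimal sketch of such a mutation step, assuming a hypothetical channel-choice set (the actual choices for `channels-64-cycleGAN` live in the repo's config code):

```python
import random

# Assumed channel-width choices for illustration only.
CHOICES = [16, 24, 32, 48, 64]

def mutate(config, prob=0.4, choices=CHOICES, rng=random):
    # Independently resample each layer's channel width with probability `prob`.
    return [rng.choice(choices) if rng.random() < prob else c for c in config]

parent = [16, 16, 24, 16, 32, 64, 16, 24]
child = mutate(parent)  # a new candidate configuration for evaluation
```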

You are supposed to find a configuration such as

fid: 64.79, config_str: 16_16_32_16_32_64_16_16, macs: 2942042112
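Note that the searched subnet respects the `--budget 3e9` constraint passed to `evolution_search.py`; a quick sanity check on the reported numbers:

```python
macs = 2942042112   # MACs of the searched config 16_16_32_16_32_64_16_16
budget = 3e9        # the --budget value passed to evolution_search.py

assert macs < budget  # the subnet fits within the MACs budget
print(f"{macs / 1e9:.3f}G MACs")  # 2.942G MACs
```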

Then you could export the model with

bash scripts/cycle_gan/horse2zebra_fast/export.sh 16_16_32_16_32_64_16_16