mit-han-lab / gan-compression

[CVPR 2020] GAN Compression: Efficient Architectures for Interactive Conditional GANs

Question about the budget setting #97

Closed: joe8767 closed this issue 2 years ago

joe8767 commented 2 years ago

Thanks for sharing the excellent research.

By searching the pretrained supernet, I was able to obtain the following configuration:

fid: 69.31, config_str: 16_16_32_16_16_64_16_24, macs: 2984771584

But I'm a little bit confused about the budget setting for the evolution search. I noticed that in the CycleGAN horse2zebra experiment, the evolution search command uses a budget of 3e9, which is larger than the reported MACs (around 2.6e9). After changing the budget to 2.6e9, the obtained configuration is

fid: 88.99, config_str: 16_16_16_48_64_64_24_16, macs: 2597322752

evolution_search.sh

#!/usr/bin/env bash
python evolution_search.py --dataroot database/horse2zebra/trainA \
  --dataset_mode single --phase train \
  --restore_G_path pretrained/cycle_gan/horse2zebra_fast/supernet/latest_net_G.pth \
  --output_dir logs/cycle_gan/horse2zebra_fast/supernet/evolution2p6 \
  --ngf 64 --batch_size 32 \
  --config_set channels-64-cycleGAN --mutate_prob 0.4 \
  --real_stat_path real_stat/horse2zebra_B.npz --budget 2.6e9 \
  --weighted_sample 2 --meta_path datasets/metas/horse2zebra/train2A.meta
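For context, my understanding (treat it as an assumption, not a statement about the repository's evolution_search.py) is that the budget acts as a hard cap on the MACs of candidate sub-networks during the search. A self-contained toy sketch in Python, using only the two configurations reported above, shows why tightening the budget from 3e9 to 2.6e9 changes the result: the 2.98e9-MAC, FID-69.31 model is no longer admissible.

# Toy illustration only, not the repository's search code.
# It assumes the budget is applied as a hard MACs cap before FID is compared.
candidates = [
    {"config_str": "16_16_32_16_16_64_16_24", "macs": 2984771584, "fid": 69.31},
    {"config_str": "16_16_16_48_64_64_24_16", "macs": 2597322752, "fid": 88.99},
]

def best_under_budget(candidates, budget):
    """Keep candidates whose MACs fit within the budget, then pick the lowest FID."""
    feasible = [c for c in candidates if c["macs"] <= budget]
    return min(feasible, key=lambda c: c["fid"]) if feasible else None

print(best_under_budget(candidates, budget=3.0e9))  # the FID 69.31 config (2.98e9 MACs) is admissible
print(best_under_budget(candidates, budget=2.6e9))  # only the FID 88.99 config (2.60e9 MACs) fits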

Are there any suggestions for obtaining compressed models with the fast pipeline while achieving FID and MACs similar to those reported in README.md?

lmxyy commented 2 years ago

This is caused by the variance of supernet training. Since this supernet is different from the one from which the pre-trained final compressed model was exported, the results may be a little bit different. You could train the supernet with a different seed to see if you get a better result. A model with computation < 3e9 MACs and FID < 70 is a reasonable result.
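If it helps with reproduction, here is a toy Python helper (not part of the repository) for checking evolution-search results collected from supernets trained with different seeds against the "< 3e9 MACs and FID < 70" criterion. It assumes the output keeps the "fid: ..., config_str: ..., macs: ..." line format shown above.

import re

# Matches lines like "fid: 69.31, config_str: 16_16_32_16_16_64_16_24, macs: 2984771584"
PATTERN = re.compile(r"fid:\s*([\d.]+),\s*config_str:\s*([\d_]+),\s*macs:\s*(\d+)")

def acceptable(lines, max_macs=3e9, max_fid=70.0):
    """Return (fid, macs, config_str) tuples meeting the criterion, best FID first."""
    results = []
    for line in lines:
        m = PATTERN.search(line)
        if m is None:
            continue
        fid, config_str, macs = float(m.group(1)), m.group(2), int(m.group(3))
        if macs < max_macs and fid < max_fid:
            results.append((fid, macs, config_str))
    return sorted(results)

lines = ["fid: 69.31, config_str: 16_16_32_16_16_64_16_24, macs: 2984771584",
         "fid: 88.99, config_str: 16_16_16_48_64_64_24_16, macs: 2597322752"]
print(acceptable(lines))  # only the first entry meets both thresholds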

joe8767 commented 2 years ago

Got it. Many thanks for your quick reply.