Closed — joe8767 closed this issue 2 years ago
Sorry for the confusion! Our pre-trained final compressed model was not exported from this pre-trained supernet; the previous supernet was deprecated because of outdated code. You can search for the subnet yourself with:
python evolution_search.py --dataroot database/horse2zebra/trainA \
--dataset_mode single --phase train \
--restore_G_path pretrained/cycle_gan/horse2zebra_fast/supernet/latest_net_G.pth \
--output_dir logs/cycle_gan/horse2zebra_fast/supernet/evolution \
--ngf 64 --batch_size 32 \
--config_set channels-64-cycleGAN --mutate_prob 0.4 \
--real_stat_path real_stat/horse2zebra_B.npz --budget 3e9 \
--weighted_sample 2 --meta_path datasets/metas/horse2zebra/train2A.meta
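At a high level, the evolution search mutates candidate per-layer channel configurations (here with `--mutate_prob 0.4`), keeps only those under the MACs budget (`--budget 3e9`), and selects by fitness. The sketch below is a toy illustration of that loop, not the repository's actual implementation: `toy_macs` is a hypothetical stand-in cost model, and the placeholder fitness function substitutes for a real FID evaluation.

```python
import random

def mutate(config, choices, prob=0.4):
    # With probability `prob`, resample each layer's channel width.
    return [random.choice(choices) if random.random() < prob else c
            for c in config]

def toy_macs(config):
    # Stand-in cost model; real MACs depend on the generator architecture.
    return sum(c * c for c in config) * 1e5

def evolution_search(choices, n_layers=8, budget=3e9, generations=20,
                     population=16, fitness=None, seed=0):
    random.seed(seed)
    # Placeholder fitness; the real search evaluates FID on generated images.
    fitness = fitness or (lambda cfg: sum(cfg))
    pop = [[random.choice(choices) for _ in range(n_layers)]
           for _ in range(population)]
    best = None
    for _ in range(generations):
        # Discard candidates that exceed the MACs budget.
        feasible = [c for c in pop if toy_macs(c) <= budget]
        feasible.sort(key=fitness)
        if feasible and (best is None or fitness(feasible[0]) < fitness(best)):
            best = feasible[0]
        # Mutate the top candidates to form the next generation.
        parents = feasible[:4] or pop[:4]
        pop = [mutate(random.choice(parents), choices)
               for _ in range(population)]
    return best
```

The real search additionally weights sampling toward promising subnets (`--weighted_sample 2`) and evaluates candidates against the real-image statistics in `real_stat/horse2zebra_B.npz`.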
You should find a configuration like:
fid: 64.79, config_str: 16_16_32_16_32_64_16_16, macs: 2942042112
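The `config_str` appears to encode per-layer channel widths as underscore-separated integers (an assumption consistent with the `channels-64-cycleGAN` search space above). A minimal parser and budget check:

```python
def parse_config_str(config_str):
    # Split an underscore-separated channel string into a list of ints.
    return [int(c) for c in config_str.split("_")]

channels = parse_config_str("16_16_32_16_32_64_16_16")
print(channels)          # [16, 16, 32, 16, 32, 64, 16, 16]
print(2942042112 < 3e9)  # True: the reported MACs fit under the 3e9 budget
```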
Then you can export the model with:
bash scripts/cycle_gan/horse2zebra_fast/export.sh 16_16_32_16_32_64_16_16
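Before running the export, it can help to sanity-check the argument format. The snippet below is a hypothetical pre-flight check, assuming the config string is eight underscore-separated channel widths (matching the search result above); the actual export invocation is left commented out:

```shell
# Hypothetical pre-flight check before export (assumption: eight
# underscore-separated channel widths).
CONFIG_STR="16_16_32_16_32_64_16_16"
if printf '%s\n' "$CONFIG_STR" | grep -Eq '^([0-9]+_){7}[0-9]+$'; then
    echo "config ok"
    # bash scripts/cycle_gan/horse2zebra_fast/export.sh "$CONFIG_STR"
else
    echo "malformed config string" >&2
    exit 1
fi
```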
Thank you for sharing your excellent research.
Following the steps described in “docs/tutorials/fast_gan_compression.md”, I downloaded the pretrained once-for-all supernet model using the provided command as follows:
After that, I directly used the provided command to export a compressed model.
export.sh:
Command and script for testing.
Testing the exported compressed model yields MACs: 2.641G, Params: 0.355M, and an FID score of 96.50.
So my concern is: am I supposed to get a better FID at this step without any fine-tuning (e.g., the FID reported in the table in README.md is 65.19)? Is there anything I missed?