Closed AtsushiHashimoto closed 5 years ago
Hi @AtsushiHashimoto,
As I've mentioned in the README, the configs given here are just examples for UDA tasks;
for the specific configs used in our paper, please refer to our supplementary material, which can be found in the NeurIPS 2019 proceedings.
Also, please make sure that data augmentation is disabled (i.e. data_augment set to False) when transferring between MNIST <--> USPS, since the two datasets share a similar style (white digits on a black background).
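The config comments this augmentation as "pixelwise flipping value". A minimal sketch of my reading of that operation (an assumption about the semantics, not the repo's actual code): for images normalized to [-1, 1], flipping negates each pixel, swapping foreground and background.

```python
def flip_values(img):
    """Invert pixel values of an image normalized to [-1, 1]."""
    return [[-v for v in row] for row in img]

# Toy 2x2 "image": one white digit pixel (1.0) on a black background (-1.0)
img = [[1.0, -1.0],
       [-1.0, -1.0]]
flipped = flip_values(img)
print(flipped)  # white/black roles swapped
```

Since both MNIST and USPS are white-on-black, the flipped samples match neither domain's style, which would explain why disabling the augmentation helps for this pair.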
Hope this information helps! Do let me know whether you're able to reproduce the results, thanks!
Thank you so much for your quick reply! I will report the result!
Hi, I got the following results with a Tesla V100. They still don't seem as good as yours X( Any ideas would help us greatly!
<< I removed my false report. Please check the next comment for a real result. >>
I'm not sure what is wrong with our config file... here it is:
exp_setting:
  exp_name: 'uda_example'        # Experiment title, log/checkpoint files will be named after this
  checkpoint_dir: 'checkpoint/'  # Folder for model checkpoints
  log_dir: 'log/'                # Folder for training logs
  data_root: 'data/'
  seed: 123456
  img_size: 32
  img_depth: 3
  source_domain: 'mnist'
  target_domain: 'usps'
  shuffle_source: False          # Notice that here using # of images from SVHN equal as MNIST
  shuffle_target: True

model:
  vae:
    encoder: [['conv', 64,4,2,1,'bn','LeakyReLU'],
              ['conv', 128,4,2,1,'bn','LeakyReLU'],
              ['conv', 256,4,2,1,'bn','LeakyReLU'],
              ['conv', 512,4,2,1,'bn','LeakyReLU'],
              ['conv', 1024,4,2,1,'','']]
    code_dim: 2
    decoder: [['conv', 512,4,2,1,'bn','LeakyReLU',True],
              ['conv', 256,4,2,1,'bn','LeakyReLU',False],
              ['conv', 128,4,2,1,'bn','LeakyReLU',False],
              ['conv', 64,4,2,1,'bn','LeakyReLU',False],
              ['conv', 3,4,2,1,'','Tanh',False]]
    lr: 0.0001
    betas: [0.5,0.999]
  D_feat:
    dnn: [['fc', 1024, '', 'LeakyReLU',0],
          ['fc', 256, '', 'LeakyReLU',0],
          ['fc', 2, '', '', 0]]
    lr: 0.0001
    betas: [0.5,0.999]
  D_digit:
    dnn: [['fc', 10, '', '',0]]
    lr: 0.0001
    betas: [0.5,0.999]

trainer:
  total_step: 500000
  batch_size: 16
  lambda:
    pix_recon:
      init: 1
      final: 1
      step: 1
    kl:
      init: 0.0000001
      final: 0.0000001
      step: 1
    feat_domain:
      init: 0.1
      final: 0.1
      step: 1
  data_augment: False            # Augmentation: pixelwise flipping value
  verbose_step: 100
  plot_step: 500
  checkpoint_step: 5000
  save_log: True
  show_fig: True
  save_fig: True
  save_checkpoint: True
  save_best_only: True
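One detail worth noting in the config: each lambda entry carries (init, final, step) fields, which look like a linear ramp schedule for the loss weights; with init == final, as in every entry above, the weight stays constant for the whole run. A hedged sketch of that interpretation (my assumption about the semantics, not the repo's actual scheduler):

```python
def lambda_at(step, init, final, ramp_steps):
    """Linearly ramp a loss weight from init to final over ramp_steps steps."""
    if step >= ramp_steps:
        return final
    return init + (final - init) * step / ramp_steps

# With init == final (as in the config above) the weight never changes:
print(lambda_at(0, 0.1, 0.1, 1), lambda_at(250000, 0.1, 0.1, 1))  # 0.1 0.1
```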
The difference in configuration between mnist->usps and usps->mnist:
$ diff config/uda_usps2mnist.yaml config/uda_mnist2usps.yaml
9,10c9,10
< source_domain: 'usps'
< target_domain: 'mnist'
---
> source_domain: 'mnist'
> target_domain: 'usps'
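As the diff shows, the two configs differ only in which domain is source and which is target. A quick way to generate the reverse direction from one config (a minimal sketch using a plain dict, not the repo's own config loader):

```python
def swap_domains(cfg):
    """Return a copy of an exp_setting dict with source and target swapped."""
    out = dict(cfg)
    out["source_domain"], out["target_domain"] = cfg["target_domain"], cfg["source_domain"]
    return out

usps2mnist = {"source_domain": "usps", "target_domain": "mnist"}
print(swap_domains(usps2mnist))  # {'source_domain': 'mnist', 'target_domain': 'usps'}
```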
I'm really sorry for my previous false report. Here are the real results, though I no longer know which run was which direction... The following are from usps->mnist and mnist->usps, each executed twice (best-checkpoint step, accuracy):

- 129000: 0.9692
- 57000: 0.9713
- 434500: 0.9706
- 34000: 0.9678
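As a quick sanity check, the four runs above can be summarized like this (using the reported accuracies rounded to four decimals):

```python
# Best-checkpoint accuracies from the four runs above: (step, accuracy)
runs = [
    (129000, 0.9692),
    (57000, 0.9713),
    (434500, 0.9706),
    (34000, 0.9678),
]

accs = [acc for _, acc in runs]
mean_acc = sum(accs) / len(accs)
# Sample standard deviation across the four runs
var = sum((a - mean_acc) ** 2 for a in accs) / (len(accs) - 1)
std_acc = var ** 0.5
print(f"mean={mean_acc:.4f}, std={std_acc:.4f}")  # mean=0.9697, std=0.0016
```

So the runs cluster tightly around 97%, a few points above the ~94% reported in the paper for USPS to MNIST.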
The config file is exactly the same as what I pasted in the previous comment. Again, thank you for your great work!!!
Glad that I could help! Though your results seem a little surprisingly high: in the paper I reported about 94% on USPS to MNIST (the difference may be due to the random seed you picked or the shuffle options, which I never had time to try before the submission deadline). But good to know anyway, thanks a lot for recognizing our work!
Hi, thank you for your great work!
On my PC, however, UDA with the setting "source: mnist, target: usps" achieves only around 65% accuracy (despite the 97% accuracy reported in the paper). The environment is a Tesla V100 with driver version 384.81.
I have only changed the 'source_domain' and 'target_domain' parameters in 'config/uda_example.yaml'; usps->mnist achieves around 93% accuracy.
If you have any secret spice for increasing the accuracy in the mnist->usps setting, could you please give us the exact config files for the other source-target domain combinations?