AMLab-Amsterdam / DIVA

Implementation of 'DIVA: Domain Invariant Variational Autoencoders'
MIT License

Higher accuracy than reported in the paper #1

Closed xyzhangfred closed 4 years ago

xyzhangfred commented 4 years ago

Hi, thanks for the great work!

I was running the code for both the supervised-only DIVA (paper_experiments/rotated_mnist/supervised/experiment_only_sup_diva.py) and the supervised-only VAE (paper_experiments/rotated_mnist/supervised/experiment_only_sup_vae.py), with test_domain set to "0". I noticed that the validation accuracy for both domain and y rose to 1.0 after several epochs. This is much higher than the test accuracy reported in the paper (~93%). Is there a reason for such a huge difference between the validation accuracy and the test accuracy? Thanks!

xyzhangfred commented 4 years ago

I realized that I had misunderstood the validation accuracy. After taking a closer look, I see that the validation accuracy is computed on the same domains as the training set, whereas the test set comes from a new, held-out domain.
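The distinction above can be sketched as a leave-one-domain-out split. This is a minimal, hypothetical helper (not the repo's actual code), assuming the standard Rotated MNIST rotation angles of 0 to 75 degrees in 15-degree steps: the domain passed as `test_domain` is reserved for testing only, while training and validation data both come from the remaining domains, which is why validation accuracy can reach 1.0 without implying good performance on the unseen rotation.

```python
# Hypothetical sketch of the leave-one-domain-out split for Rotated MNIST.
# Rotation angles (in degrees) commonly used for this benchmark; the actual
# set in the repo may be configured differently.
DOMAINS = ["0", "15", "30", "45", "60", "75"]

def split_domains(test_domain):
    """Return (train/val domains, test domain) for a leave-one-out split.

    Validation examples are drawn from the train/val domains, so validation
    accuracy measures in-domain fit, not generalisation to the held-out domain.
    """
    if test_domain not in DOMAINS:
        raise ValueError(f"unknown domain: {test_domain}")
    train_val = [d for d in DOMAINS if d != test_domain]
    return train_val, test_domain

train_val, test = split_domains("0")
print(train_val)  # domains seen during training and validation
print(test)       # held-out domain, evaluated only at test time
```

With `test_domain = "0"`, the model never sees unrotated digits during training or validation, so the ~93% figure in the paper reflects transfer to that unseen rotation.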