Hi, thanks for the great work!

I was running the code for both the supervised-only DIVA (paper_experiments/rotated_mnist/supervised/experiment_only_sup_diva.py) and the supervised-only VAE (paper_experiments/rotated_mnist/supervised/experiment_only_sup_vae.py), with test_domain set to "0". I noticed that the validation accuracy for both the domain and y classifiers rose to 1.0 after several epochs, which is much higher than the test accuracy reported in the paper (~93%). Is there a reason for such a large gap between the validation accuracy and the test accuracy? Thanks!
I realized that I misunderstood the validation accuracy. After taking a closer look, the validation set is drawn from the same domains as the training set, while the test set comes from the held-out domain, so the two numbers are not directly comparable.
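For anyone else who hits this, here is a minimal sketch of the leave-one-domain-out split as I now understand it. The `rotated_mnist` helper and the 90/10 train/validation split below are my own illustration, not the repo's actual data loader:

```python
from torchvision import datasets, transforms
from torch.utils.data import ConcatDataset, random_split

rotations = [0, 15, 30, 45, 60, 75]  # the six rotated-MNIST domains (degrees)
test_domain = 0                      # the held-out domain ("0" in my run)

def rotated_mnist(angle, train=True):
    # Hypothetical helper: each domain is plain MNIST rotated by one fixed angle.
    tfm = transforms.Compose([
        transforms.RandomRotation((angle, angle)),  # fixed, not random, rotation
        transforms.ToTensor(),
    ])
    return datasets.MNIST("data", train=train, download=True, transform=tfm)

# Training and validation both come from the five *seen* domains, so the
# validation accuracy is in-domain and can saturate at 1.0.
seen = ConcatDataset([rotated_mnist(a) for a in rotations if a != test_domain])
n_val = len(seen) // 10
train_set, val_set = random_split(seen, [len(seen) - n_val, n_val])

# The test set is the single unseen rotation; this out-of-domain accuracy is
# what the ~93% in the paper measures.
test_set = rotated_mnist(test_domain, train=False)
```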