This repository contains code for the paper Learning by Association - A versatile semi-supervised training method for neural networks (CVPR 2017) and the follow-up work Associative Domain Adaptation (ICCV 2017).
Hi,
I'm trying to reproduce the results for the "svhn --> mnist" experiment, since that is the only domain-adaptation case published in your paper.
I used the hyperparameters you provided, but I could not reproduce the paper's result.
After running eval.py, I only get an accuracy of about 95.66%.
Your paper reports an error of 0.51% for this case (Table 5). Does that mean an accuracy of about 99.5%?
Is there something I'm missing here?
I also referred to your earlier response in issues/3.
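Just to double-check my arithmetic, here is a minimal sketch of the error-to-accuracy conversion, using the 0.51% error from Table 5 and the 95.66% accuracy from my run:

```python
# Convert the paper's reported error rate (%) into accuracy (%).
paper_error = 0.51               # Table 5, svhn -> mnist
paper_accuracy = 100.0 - paper_error
print(paper_accuracy)            # 99.49

# Gap between the paper's accuracy and what I observe with eval.py.
my_accuracy = 95.66
gap = round(paper_accuracy - my_accuracy, 2)
print(gap)                       # 3.83
```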
Alright, so I re-ran the training myself and everything seems fine. I have uploaded the logs for you, including the hyperparameters and TFEvents, so you can visualize the graph with TensorBoard: https://vision.in.tum.de/~haeusser/da_svhn_mnist.zip
The flags I used for eval.py:

```python
flags.DEFINE_string('dataset', 'mnist3', 'Which dataset to work on.')
flags.DEFINE_string('architecture', 'svhn_model', 'Which network architecture to use.')
flags.DEFINE_integer('eval_batch_size', 500, 'Batch size for eval loop.')
flags.DEFINE_integer('new_size', 32, 'If > 0, resize image to this width/height. '
                     'Needs to match size used for training.')
flags.DEFINE_integer('emb_size', 128,
                     'Size of the embeddings to learn.')
flags.DEFINE_integer('eval_interval_secs', 300,
                     'How many seconds between executions of the eval loop.')
flags.DEFINE_string('logdir', '/storage/transfer_learning/log2/semisup',
                    'Where the checkpoints are stored '
                    'and eval events will be written to.')
flags.DEFINE_string('master', '',
                    'BNS name of the TensorFlow master to use.')
flags.DEFINE_integer('timeout', 1200,
                     'The maximum amount of time to wait between checkpoints. '
                     'If left as None, then the process will wait '
                     'indefinitely.')
```
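For reference, a sketch of how these flags translate into an eval.py invocation; the logdir path is specific to my machine, so adjust it for your setup:

```shell
# Hypothetical command line matching the flag values above.
python eval.py \
  --dataset=mnist3 \
  --architecture=svhn_model \
  --eval_batch_size=500 \
  --new_size=32 \
  --emb_size=128 \
  --logdir=/storage/transfer_learning/log2/semisup
```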
I visualized your log.
The accuracy there is 97.59%, which still differs from the result in your paper (0.51% error, Table 5).
Any thoughts, or exact instructions on how to replicate any of the results from the paper, would be greatly appreciated.
Hyemin