YU1ut / MixMatch-pytorch

Code for "MixMatch - A Holistic Approach to Semi-Supervised Learning"

Implementation of lambda_u is not correct #6

Closed wang3702 closed 5 years ago

wang3702 commented 5 years ago

```python
w_match *= tf.clip_by_value(tf.cast(self.step, tf.float32) / (warmup_kimg << 10), 0, 1)
```

Here warmup_kimg=1024, that is to say, it should be `w_match *= clip(step / 1048576, 0, 1)`. Yours:

```python
def linear_rampup(current, rampup_length=16):
    if rampup_length == 0:
        return 1.0
    else:
        current = np.clip(current / rampup_length, 0.0, 1.0)
        return float(current)
```
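For reference, the TensorFlow expression can be written equivalently in plain NumPy (a minimal sketch, assuming `step` counts training images seen, since the official code adds `FLAGS.batch` to it every iteration; the function name is illustrative):

```python
import numpy as np

def official_rampup(step, warmup_kimg=1024):
    # step counts training images seen so far;
    # the weight ramps linearly from 0 to 1 over warmup_kimg << 10 = 1,048,576 images.
    return float(np.clip(step / (warmup_kimg << 10), 0.0, 1.0))
```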

YU1ut commented 5 years ago

1048576 / (64 [batch_size] * 1024 [iterations_per_epoch]) = 16 epochs. Am I wrong? The step in their code is updated by

```python
self.ops.update_step = tf.assign_add(self.step, FLAGS.batch)
```

https://github.com/google-research/mixmatch/blob/master/libml/train.py#L51
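Spelling out that arithmetic as a quick check (a sketch; batch size 64 and 1024 iterations per epoch are the defaults assumed in this thread):

```python
# Quick check of the 16-epoch claim.
batch_size = 64
iters_per_epoch = 1024
warmup_images = 1024 << 10                     # warmup_kimg=1024 -> 1,048,576 images
epochs_to_full_weight = warmup_images / (batch_size * iters_per_epoch)
print(epochs_to_full_weight)                   # 16.0, i.e. rampup_length=16 epochs
```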

wang3702 commented 5 years ago

For your batch size, no. However, a user may set a different batch_size, so your update strategy is not correct in general. Also, please note that Google's actual training batch size is also different from yours (see their paper).
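One way to make the rampup independent of the user's batch_size is to count training images seen instead of epochs (a sketch only, not this repository's actual fix; the names are illustrative):

```python
import numpy as np

def linear_rampup_by_images(images_seen, warmup_images=1024 << 10):
    # Weight reaches 1.0 after warmup_images (1,048,576 by default) training images,
    # regardless of the batch size chosen by the user.
    return float(np.clip(images_seen / warmup_images, 0.0, 1.0))

# Illustrative use inside a training loop:
# images_seen = global_step * batch_size
# loss = loss_x + lambda_u * linear_rampup_by_images(images_seen) * loss_u
```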

YU1ut commented 5 years ago

OK. I will fix it. But the training batch size is always 64 in all their experiments. https://github.com/google-research/mixmatch/blob/master/mixmatch.py#L144