ildoonet / pytorch-randaugment

Unofficial PyTorch Reimplementation of RandAugment.
MIT License

The prob of applying augmentation? #20

Open ghost opened 3 years ago

ghost commented 3 years ago

Hi and thanks for this awesome repo.

I just checked the original TensorFlow implementation and found a part that differs from this repo. In the original implementation, there is a probability of applying (or not applying) each augmentation, but I did not find that in this repo.

The link for TensorFlow version: https://github.com/tensorflow/tpu/blob/5144289ba9c9e5b1e55cc118b69fe62dd868657c/models/official/efficientnet/autoaugment.py#L532

Original (TensorFlow):

    with tf.name_scope('randaug_layer_{}'.format(layer_num)):
        for (i, op_name) in enumerate(available_ops):
            prob = tf.random_uniform([], minval=0.2, maxval=0.8,
                                     dtype=tf.float32)
            func, _, args = _parse_policy_info(op_name, prob, random_magnitude,
                                               replace_value,
                                               augmentation_hparams)

This repo:

    ops = random.choices(self.augment_list, k=self.n)
    for op, minval, maxval in ops:
        val = (float(self.m) / 30) * float(maxval - minval) + minval
        img = op(img, val)
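For illustration, here is a minimal sketch of how this repo's loop could be modified to mimic the TF behavior, gating each sampled op with a per-op probability drawn from Uniform(0.2, 0.8). The function name `rand_augment_with_prob` and the plain-function signature are hypothetical, not part of this repo:

```python
import random

def rand_augment_with_prob(img, augment_list, n, m):
    # Hypothetical variant of this repo's __call__: like the TF version,
    # each sampled op is applied only with probability ~ Uniform(0.2, 0.8).
    ops = random.choices(augment_list, k=n)
    for op, minval, maxval in ops:
        prob = random.uniform(0.2, 0.8)  # per-op apply probability, as in TF
        if random.random() > prob:
            continue  # skip this op entirely
        val = (float(m) / 30) * float(maxval - minval) + minval
        img = op(img, val)
    return img
```

With this change, a sampled op is sometimes skipped, so the effective number of applied ops per image varies between 0 and n rather than always being exactly n.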

May I ask if there is a reason for this difference? Or is there a part I am missing?

Thanks in advance

JiyueWang commented 3 years ago

In addition, the Identity operation is not included in this repo's augment list.
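Adding Identity would be a small change. A minimal sketch, assuming the repo's convention of (op, minval, maxval) triples where each op takes (img, val); the magnitude range (0.0, 1.0) here is an arbitrary placeholder since Identity ignores it:

```python
def Identity(img, v):
    # No-op augmentation; the magnitude v is ignored.
    return img

# Hypothetically, Identity would be prepended to the existing op list:
augment_list = [
    (Identity, 0.0, 1.0),
    # ... the repo's existing (op, minval, maxval) triples would follow here
]
```

Because Identity returns the input unchanged, sampling it is equivalent to skipping one layer, which partially recovers the "sometimes do nothing" behavior of the TF probability gate.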