LiJunnan1992 / MLNT

Meta-Learning based Noise-Tolerant Training
123 stars 29 forks

Update main.py #10

Closed Tverous closed 4 years ago

Tverous commented 4 years ago

From the original paper

Learning to Learn from Noisy Labeled Data

The algorithm states that the meta loss should be calculated only after all of the consistency losses have been accumulated:

(screenshot: the algorithm pseudocode from the paper)

In the original code, however, the consistency loss is divided by the number of mini-batches before it has been fully accumulated:

(screenshot: the relevant code in main.py)

This issue was mentioned in #7 before, but if the consistency_loss gradient is accumulated across the M sets of synthetic data, then it should not be divided by M (args.num_fast in the source) every time a set of noisy labels is generated.
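To make the ordering concrete, here is a minimal, framework-free sketch of the accumulation order the pull request argues for: sum the consistency loss over all M (args.num_fast) sets of synthetic noisy labels first, then divide by M exactly once before the single meta update. The names `consistency_loss`, `meta_update`, and `synthetic_label_sets` are illustrative stand-ins, not the repository's actual API.

```python
def meta_step(synthetic_label_sets, num_fast, consistency_loss, meta_update):
    """Accumulate-then-average ordering from the paper's algorithm.

    synthetic_label_sets: the M sets of synthetic noisy labels
    num_fast:             M (args.num_fast in the source)
    consistency_loss:     callable computing the loss for one set
    meta_update:          callable applying the single meta update
    """
    total = 0.0
    for labels in synthetic_label_sets:
        total += consistency_loss(labels)  # accumulate only, no division here
    meta_loss = total / num_fast           # divide by M exactly once
    meta_update(meta_loss)                 # one meta update, after accumulation
    return meta_loss
```

For example, with per-set losses 1.0, 2.0, and 3.0 and M = 3, this yields a meta loss of 2.0; the point of the fix is that the division and the meta update happen once, after the loop, rather than inside it.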

This pull request should also fix the problems reported in #8 and #9.

Any GPU memory usage problems should be resolved by decreasing args.num_fast.

LiJunnan1992 commented 4 years ago

Thanks for the update! I have merged the pull request.