Hao-Ning / MEIDTM-Instance-Dependent-Label-Noise-Learning-with-Manifold-Regularized-Transition-Matrix-Estimatio


About distilling method #2

Open Sirius222 opened 1 year ago

Sirius222 commented 1 year ago

There are some problems with the code. Why do you select 80% of the data sorted in ascending order? Specifically, in the `distilling()` method:

https://github.com/Hao-Ning/MEIDTM-Instance-Dependent-Label-Noise-Learning-with-Manifold-Regularized-Transition-Matrix-Estimatio/blob/ef1493fcbca4c3fb8c4c98c542eae5d923cb3a29/run_ours.py#L72
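For context, a common pattern in label-noise learning is small-loss selection: sort examples by per-sample loss in ascending order and treat the lowest-loss fraction as "distilled" (likely clean) examples. A minimal sketch of that interpretation, assuming the 80% in `run_ours.py` is such a small-loss cutoff (the repo's exact ranking criterion may differ):

```python
import numpy as np

def distill_indices(losses: np.ndarray, keep_ratio: float = 0.8) -> np.ndarray:
    """Return indices of the keep_ratio fraction of samples with the
    smallest loss: sort ascending and keep the front of the ranking."""
    order = np.argsort(losses)              # ascending: smallest loss first
    n_keep = int(len(losses) * keep_ratio)  # e.g. 80% of the dataset
    return order[:n_keep]

# Toy example: 10 per-sample losses; the 2 largest are dropped as noisy.
losses = np.array([0.1, 2.5, 0.3, 1.7, 0.2, 3.0, 0.4, 0.9, 0.05, 1.1])
kept = distill_indices(losses, keep_ratio=0.8)
```

Under this reading, "ascending order" is what makes the selection keep the most confident samples rather than the least confident ones.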

Also, why are the transition matrix network's (Linear) parameters not randomly initialized?
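One plausible explanation (an assumption on my part, not confirmed by the authors) is that transition-matrix layers are often initialized near the identity, so that training starts from the prior that most labels are clean. A sketch of such an initialization, using hypothetical names and a numpy stand-in for the layer's row-softmax output:

```python
import numpy as np

def identity_init_transition(num_classes: int, off_diag: float = 1e-2) -> np.ndarray:
    """Build a row-stochastic transition matrix close to the identity.
    Diagonal logits are 0; off-diagonal logits are log(off_diag), so a
    row softmax puts almost all mass on the diagonal (T ~ I)."""
    logits = np.full((num_classes, num_classes), np.log(off_diag))
    np.fill_diagonal(logits, 0.0)
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)  # each row sums to 1

T = identity_init_transition(10)  # near-identity 10x10 noise model
```

With a purely random initialization, the initial T would mix class posteriors arbitrarily, which can destabilize the early joint training of the classifier and the transition network; a near-identity start avoids that.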

randydkx commented 1 year ago

@Sirius222 Can you reproduce the results in the paper? I found that the algorithm in the code differs from the one described in the paper.