Open Mr-Kyrie opened 1 year ago
Yes, your understanding is correct. The model is randomly re-initialized after each self-training round and then predicts the new pseudo-labels.
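For anyone else reading this thread, the loop described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code; `train_model` is a hypothetical stand-in (here a toy majority-label "classifier") for training a freshly initialized network.

```python
def train_model(data):
    # Hypothetical stand-in for training a randomly initialized model
    # from scratch: this toy "model" just predicts the majority label
    # seen in its training data.
    labels = [y for _, y in data]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def self_training(labeled, unlabeled, rounds=3):
    """Self-training loop: labeled is a list of (x, y) pairs,
    unlabeled is a list of x samples."""
    pseudo = []
    for _ in range(rounds):
        # The model is re-initialized (not warm-started) every round
        # and trained on labeled + current pseudo-labeled data.
        model = train_model(labeled + pseudo)
        # The newly trained model regenerates ALL pseudo-labels.
        pseudo = [(x, model(x)) for x in unlabeled]
    return pseudo
```

The key points matching the answer above: a fresh model each round, and the latest model overwrites all pseudo-labels rather than accumulating them.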
When I was training with gssl, the trained model performed poorly when the unlabeled data was labeled as cls3, but performed normally when it was labeled as std. This leads to a problem: once one self-training stage performs poorly, the subsequent rounds get worse and worse. How can this problem be solved?
Are you training on a different dataset or domain? Did you change any code?
The training code is not changed. I only changed the training dataset to my own face data, training in the face-landmark domain. But the trained gssl model performs poorly when the unlabeled data is labeled as cls3.
I see. It is hard to diagnose the problem remotely. You may try debugging to find the cause of the poor performance, or some hyper-parameters may need to be modified.
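One common mitigation for the "errors get worse each round" issue (a general self-training trick, not something specific to this repo) is to keep only high-confidence pseudo-labels, so mistakes from a bad round are less likely to be reinforced. A minimal sketch, where `predictions` is an assumed list of `(sample, label, confidence)` tuples:

```python
def filter_pseudo_labels(predictions, threshold=0.9):
    """Keep only pseudo-labels whose confidence meets the threshold.

    predictions: list of (sample, label, confidence) tuples produced
    by the current round's model. Dropping low-confidence predictions
    reduces the chance that early mistakes accumulate across rounds.
    """
    return [(x, y) for x, y, conf in predictions if conf >= threshold]
```

You would then train the next round on the labeled data plus only the filtered pseudo-labels, possibly raising the threshold for classes (like cls3 here) where the model is known to be unreliable.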
Hello, I would like to ask about the details of gssl training. In the paper, is a new network initialized for each task when training with gssl, and is the latest trained network the one used to predict each round's pseudo-labels?