zivlzw opened this issue 1 year ago
Hi,
Thanks for your interest!
`head.py` contains the configs for pseudo-label generation. From the class definition, you can see that `w.1` means the ERDA loss weight is 0.1. This is the default setting, but you could also try a loss weight of 0.01, i.e. `w.01`.
Feel free to change these settings to whatever values fit your experiments.
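To make the naming concrete, here is a minimal sketch of how such an ops string could be decoded. The `parse_erda_weight` helper is purely illustrative and is not the actual parsing code in `head.py`:

```python
# Hypothetical helper: decode the ERDA loss weight from an ops string
# such as 'fout-pmlp2|mom-Gsum|normdot|w.1'. The real head.py may split
# these tokens differently; this only illustrates the naming convention.
def parse_erda_weight(ops: str, default: float = 1.0) -> float:
    for token in ops.split('|'):
        if token.startswith('w.'):
            # 'w.1' -> 0.1, 'w.01' -> 0.01
            return float('0' + token[1:])
    return default

assert parse_erda_weight('fout-pmlp2|mom-Gsum|normdot|w.1') == 0.1
assert parse_erda_weight('fout-pmlp2|mom|normdot|w.01') == 0.01
```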
Best, Liyao
Thank you for your prompt reply. Do you mean that `w` is the α in the formula in the figure?
Yes, you are right.
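In other words, assuming the objective takes the usual weighted-sum form suggested by the figure (this exact notation is my reading, not a quote from the paper):

$$\mathcal{L} = \mathcal{L}_{\text{seg}} + \alpha\,\mathcal{L}_{\text{ERDA}},$$

so `w.1` corresponds to α = 0.1 and `w.01` to α = 0.01.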
May I ask whether this config runs full supervision or weak supervision?
```
python main.py -c config/s3dis/randla_erda.yaml --gpus 2
```

with `pseudo-fout-pmlp2-mom-Gsum-normdot-w.1`.
In `head.py` we found the code:

```python
class pseudo(Head):
    _attr_dict = {'_ops': [
        'fout-pmlp2|mom|normdot|w.1',
        'fout-pmlp2|mom-Gavg|normdot|w.1',
        'fout-pmlp2|mom-Gsum|normdot|w.1',
```
Does `w.01` mean 0.01% weak supervision and `w.1` mean 0.1% weak supervision? And how about full supervision with ERDA?
Looking forward to your reply. Thank you!