LiyaoTang / ERDA

All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation (NeurIPS 2023)
MIT License
22 stars · 1 fork

Question about parameter configuration #3

Open zivlzw opened 9 months ago

zivlzw commented 9 months ago

May I ask whether this config runs full supervision or weak supervision? `python main.py -c config/s3dis/randla_erda.yaml --gpus 2` with `pseudo-fout-pmlp2-mom-Gsum-normdot-w.1`

In head.py we found the code:

```python
class pseudo(Head):
    _attr_dict = {'_ops': [
        'fout-pmlp2|mom|normdot|w.1',
        'fout-pmlp2|mom-Gavg|normdot|w.1',
        'fout-pmlp2|mom-Gsum|normdot|w.1',
        'fout-pmlp2|mom-I|normdot|w.1',
        'fout-pmlp2|mom-I|normdot|w.01',
        'fout-pmlp2|mom-I-Gsum|normdot|w.1',
        'fout-pmlp2|mom-I-Gsum|normdot|w.01',
    ]}
```

Does `w.01` mean 0.01% weak supervision and `w.1` mean 0.1% weak supervision? And what about full supervision with ERDA?

Looking forward to your reply. Thank you!

LiyaoTang commented 9 months ago

Hi,

Thanks for your interest!

head.py contains the configs for pseudo-label generation. As you can see from the class definition, `w.1` means the weight of the ERDA loss is 0.1. This is the default setting, but you could also try a loss weight of 0.01, i.e., `w.01`.

Feel free to play around with these settings and try other values that fit your use case.
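A minimal sketch of this naming convention, assuming the op string is simply split on `|` and the `w.*` field encodes a decimal loss weight (the `parse_op` helper below is hypothetical, not the repo's actual parser):

```python
def parse_op(op: str) -> dict:
    """Decode an op string such as 'fout-pmlp2|mom-Gsum|normdot|w.1'.

    Hypothetical illustration: fields are separated by '|', and the
    'w.*' field carries the ERDA loss weight ('w.1' -> 0.1, 'w.01' -> 0.01).
    """
    cfg = {'weight': None, 'fields': []}
    for field in op.split('|'):
        if field.startswith('w.'):
            # Prepend '0' so '.1' parses as 0.1 and '.01' as 0.01.
            cfg['weight'] = float('0' + field[1:])
        else:
            cfg['fields'].append(field)
    return cfg

print(parse_op('fout-pmlp2|mom-Gsum|normdot|w.1'))
# {'weight': 0.1, 'fields': ['fout-pmlp2', 'mom-Gsum', 'normdot']}
```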

Best, Liyao

zivlzw commented 9 months ago


Thank you for your prompt reply. Do you mean that `w` is the α in the formula in the figure?

[screenshot of the loss formula from the paper]

LiyaoTang commented 9 months ago

Yes, you are right.
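In other words, the weight set by `w.*` is the α that scales the ERDA term in the overall training objective. A hedged reconstruction based on this exchange (not copied verbatim from the paper):

$$L_{\text{total}} = L_{\text{seg}} + \alpha \, L_{\text{ERDA}}$$

so `w.1` sets α = 0.1 and `w.01` sets α = 0.01.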