ZhenglinZhou / STAR

[CVPR 2023] STAR Loss: Reducing Semantic Ambiguity in Facial Landmark Detection

About reproducing the results of the paper #28

Closed. Phil-Lin closed this issue 6 months ago.

Phil-Lin commented 8 months ago

Hi, I'm trying to reproduce your paper results. I followed your steps:

```bash
python main.py --mode=train --device_ids=0,1,2,3 --image_dir=${image_dir} --annot_dir=${annot_dir} --data_definition=WFLW
```

My config is:

```
loader_type: alignment
loss_func: STARLoss_v2
batch_size: 128
val_batch_size: 32
test_batch_size: 16
channels: 3
width: 256
height: 256
means: (127.5, 127.5, 127.5)
scale: 0.00784313725490196
display_iteration: 10
milestones: [200, 350, 450]
max_epoch: 500
net: stackedHGnet_v1
nstack: 4
optimizer: adam
learn_rate: 0.001
momentum: 0.01
weight_decay: 1e-05
nesterov: False
scheduler: MultiStepLR
gamma: 0.1
loss_weights: [0.125, 1.25, 1.25, 0.25, 2.5, 2.5, 0.5, 5.0, 5.0, 1.0, 10.0, 10.0]
criterions: ['STARLoss_v2', 'AWingLoss', 'AWingLoss', 'STARLoss_v2', 'AWingLoss', 'AWingLoss', 'STARLoss_v2', 'AWingLoss', 'AWingLoss', 'STARLoss_v2', 'AWingLoss', 'AWingLoss']
metrics: ['NME', None, None, 'NME', None, None, 'NME', None, None, 'NME', None, None]
key_metric_index: 9
classes_num: [98, 9, 98]
label_num: 12
ema: True
use_AAM: True
writer: <tensorboardX.writer.SummaryWriter object at 0x7ff5ef0a76d0>
logger: <RootLogger root (NOTSET)>
data_definition: WFLW
test_file: test.tsv
aug_prob: 1.0
val_epoch: 1
valset: test.tsv
norm_type: default
encoder_type: default
decoder_type: default
betas: [0.9, 0.999]
train_num_workers: 16
val_num_workers: 16
test_num_workers: 0
add_coord: True
star_w: 1
star_dist: smoothl1
edge_info: ((False, (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32)), (True, (33, 34, 35, 36, 37, 38, 39, 40, 41)), (True, (42, 43, 44, 45, 46, 47, 48, 49, 50)), (False, (51, 52, 53, 54)), (False, (55, 56, 57, 58, 59)), (True, (60, 61, 62, 63, 64, 65, 66, 67)), (True, (68, 69, 70, 71, 72, 73, 74, 75)), (True, (76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87)), (True, (88, 89, 90, 91, 92, 93, 94, 95)))
nme_left_index: 60
nme_right_index: 72
crop_op: True
```
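For reference, here is a minimal sketch of how an inter-ocular-normalized NME is typically computed for WFLW, using the `nme_left_index=60` and `nme_right_index=72` outer-eye-corner landmarks from the config above. This illustrates the standard metric only; it is not the repo's exact implementation.

```python
import numpy as np

def nme_interocular(pred, gt, left_idx=60, right_idx=72):
    """Normalized Mean Error for one face.

    pred, gt: (98, 2) arrays of predicted / ground-truth WFLW landmarks.
    The mean point-to-point error is normalized by the inter-ocular
    distance, i.e. the distance between the outer eye corners
    (indices 60 and 72 in WFLW), matching nme_left_index /
    nme_right_index in the config above.
    """
    interocular = np.linalg.norm(gt[left_idx] - gt[right_idx])
    per_point_error = np.linalg.norm(pred - gt, axis=1)  # shape (98,)
    return per_point_error.mean() / interocular

# Shape-only example with random data (the values are meaningless):
pred = np.random.rand(98, 2) * 256
gt = np.random.rand(98, 2) * 256
print(f"NME = {nme_interocular(pred, gt):.4f}")  # a fraction, e.g. 0.0402 ~ 4.02%
```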

However, the model accuracy (NME) I obtained from training is 4.08, which is worse than the result reported in the paper. Do you know what went wrong?


ZhenglinZhou commented 7 months ago

Hi @Phil-Lin, thanks for your interest!

You can try with batch_size=32. The batch-size experiments are listed below; we report two results under different random seeds:

| Batch Size | WFLW (NME) |
| --- | --- |
| 128 | 4.05 / 4.09 |
| 64 | 4.05 / 4.06 |
| 32 | 4.03 / 4.02 |
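The fields in the config dump above look like attributes of the training config object, so switching to batch_size=32 should only require changing that one value before launching training. The sketch below is a hypothetical illustration of the idea; the class name and fields are stand-ins based on the posted dump, not the repo's confirmed API.

```python
# Hypothetical stand-in for the project's config object; the real project
# builds its config internally (the first post shows fields such as
# batch_size, learn_rate, max_epoch).

class TrainConfig:
    def __init__(self):
        self.batch_size = 128    # value used in the original run
        self.learn_rate = 0.001
        self.max_epoch = 500

config = TrainConfig()
config.batch_size = 32           # smaller batch, as suggested above

print(f"training with batch_size={config.batch_size}")
```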

If you have any other questions, feel free to leave a comment.

s0966066980 commented 5 months ago

Hello, I ran into the same problem. This is my first time working with this project. The model accuracy (NME) I got from training is Final NME: 0.040189. Does that correspond to NME = 4.01?
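For what it's worth, the final number is printed as a fraction of the inter-ocular distance; multiplying by 100 gives the percentage form used in the paper's tables. A quick check of the arithmetic, not an official answer from the maintainers:

```python
final_nme = 0.040189              # value printed at the end of training
print(f"{final_nme * 100:.4f}")   # 4.0189, i.e. roughly 4.02 on the paper's scale
```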