AnthonyF333 / PFLD_GhostOne

Apache License 2.0

Could you please share the training config used for the best-performing pretrained model? #2

Closed Yuhyeong closed 2 years ago

Yuhyeong commented 2 years ago

I'm working on training a 106-point model with PFLD_GhostOne, using the original config:

2022-11-01 01:24:46,444:INFO: SEED: 2023
2022-11-01 01:24:46,444:INFO: DEVICE: cuda
2022-11-01 01:24:46,444:INFO: GPU_ID: 0
2022-11-01 01:24:46,444:INFO: TRANSFORM: Compose(
    ToTensor()
    Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
)
2022-11-01 01:24:46,444:INFO: MODEL_TYPE: PFLD_GhostOne
2022-11-01 01:24:46,444:INFO: INPUT_SIZE: [112, 112]
2022-11-01 01:24:46,444:INFO: WIDTH_FACTOR: 1
2022-11-01 01:24:46,444:INFO: LANDMARK_NUMBER: 106
2022-11-01 01:24:46,444:INFO: TRAIN_BATCH_SIZE: 32
2022-11-01 01:24:46,444:INFO: VAL_BATCH_SIZE: 8
2022-11-01 01:24:46,444:INFO: TRAIN_DATA_PATH: ./data/train.txt
2022-11-01 01:24:46,444:INFO: VAL_DATA_PATH: ./data/val.txt
2022-11-01 01:24:46,444:INFO: EPOCHES: 80
2022-11-01 01:24:46,445:INFO: LR: 0.0001
2022-11-01 01:24:46,445:INFO: WEIGHT_DECAY: 1e-06
2022-11-01 01:24:46,445:INFO: NUM_WORKERS: 8
2022-11-01 01:24:46,445:INFO: MILESTONES: [55, 65, 75]
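
For reference, the LR, WEIGHT_DECAY, and MILESTONES values above map onto a standard PyTorch optimizer plus step-decay schedule. The sketch below only illustrates that mapping; the choice of Adam, the gamma of 0.1, and the placeholder model are assumptions on my part, not confirmed from this repo's training script:

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder standing in for PFLD_GhostOne (106 landmarks -> 212 outputs).
model = torch.nn.Linear(112 * 112 * 3, 106 * 2)

# Logged hyperparameters: LR 1e-4, WEIGHT_DECAY 1e-6, MILESTONES [55, 65, 75].
# Adam and gamma=0.1 are assumptions, not taken from the repo.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)
scheduler = MultiStepLR(optimizer, milestones=[55, 65, 75], gamma=0.1)

for epoch in range(80):  # EPOCHES: 80
    # ... run one training epoch here ...
    scheduler.step()     # step once per epoch so the LR drops at the milestones
```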

After 80 epochs, the best checkpoint was saved with the following metrics:

2022-11-01 09:56:28,787:INFO: Save best model
2022-11-01 09:56:28,953:INFO: Train_Loss: 6.268509387969971
2022-11-01 09:56:28,954:INFO: Val_Loss: 9.945393562316895
2022-11-01 09:56:28,955:INFO: Val_NME: 5.3965402340156405
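
For context, the Val_NME above is a normalized mean error over the 106 landmarks; a minimal sketch of how such a metric is typically computed is below. The choice of normalization distance is an assumption (some pipelines use the crop size, others the inter-ocular distance), so this is not necessarily the exact formula used by this repo:

```python
import numpy as np

def nme(pred, gt, norm_dist):
    """Normalized Mean Error: mean point-to-point L2 error divided by a
    normalization distance (e.g. crop size or inter-ocular distance),
    reported as a percentage.

    pred, gt: arrays of shape (N, 106, 2) in pixel coordinates.
    """
    per_point = np.linalg.norm(pred - gt, axis=-1)   # (N, 106)
    per_image = per_point.mean(axis=-1) / norm_dist  # (N,)
    return per_image.mean() * 100.0
```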

**The result on my 112x112 test image is almost a thick line across the whole picture; all points gather into a line. I thought it was underfitting, so I set lr to 1e-7 and weight_decay to 1e-9 and ran another 80 epochs, but the loss does not decrease at all.**
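
A quick way to confirm this symptom is to map the raw network output back onto the 112x112 crop and draw it. The sketch below assumes the model outputs 212 values normalized to [0, 1], which matches common PFLD-style heads but is not confirmed for this repo:

```python
import cv2
import numpy as np

def draw_landmarks(img_112, landmarks_norm):
    """Draw predicted landmarks on a 112x112 BGR crop.

    landmarks_norm: flat array of 212 values assumed to be (x, y) pairs
    normalized to [0, 1]; if all points collapse onto a line, the drawn
    dots will show it immediately.
    """
    pts = np.asarray(landmarks_norm, dtype=np.float32).reshape(-1, 2) * 112
    for x, y in pts:
        cv2.circle(img_112, (int(x), int(y)), 1, (0, 255, 0), -1)
    return img_112
```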

So could you please share the training config of your pretrained model, or the steps you took during training?

Yuhyeong commented 2 years ago

Sorry, the problem was in the preprocessing stage; I have already fixed it.
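
For anyone who hits the same symptom: one common source of collapsed or garbage landmark predictions is a mismatch between the coordinates written during data generation and what the training code expects (for example, absolute pixel values where normalized [0, 1] values are expected). A minimal sanity check on the generated label files is sketched below; the line format (image path followed by 212 numbers) and the [0, 1] convention are assumptions, not necessarily the exact format used here:

```python
import numpy as np

def check_label_file(path, n_points=106):
    """Rough sanity check on a generated train/val label file.

    Assumed line format (not confirmed for this repo):
        <image_path> x1 y1 x2 y2 ... x106 y106 [extra fields...]
    Coordinates are assumed to be normalized to [0, 1] relative to the
    112x112 crop; values far outside that range, or near-constant values,
    suggest a preprocessing bug.
    """
    with open(path) as f:
        for line_no, line in enumerate(f, 1):
            fields = line.split()
            coords = np.array(fields[1:1 + 2 * n_points], dtype=np.float32)
            if coords.min() < -0.2 or coords.max() > 1.2:
                print(f"line {line_no}: coords outside [0, 1] "
                      f"(min={coords.min():.3f}, max={coords.max():.3f})")
            if coords.std() < 1e-3:
                print(f"line {line_no}: coords are nearly constant")
```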