JiahuiLei / GART

GART: Gaussian Articulated Template Models
https://www.cis.upenn.edu/~leijh/projects/gart/
MIT License

training condition for own dataset #13

Closed: serizawa-04013958 closed this issue 2 months ago

serizawa-04013958 commented 5 months ago

Hello, I tried the NeuMan dataset with both the UBC dataset training condition and the PeopleSnapshot dataset training condition, but the results were not so good. Should we fine-tune the hyperparameters ourselves? I'd like to know how to optimize it, or is this an expected result? (da-pose result image)

JiahuiLei commented 2 months ago

Thanks for your question. This looks a little worse than our reported result, like the one below: (da-pose result image)

It seems I forgot to provide the NeuMan profile in the repo; here is the profile config I used for NeuMan:

TOTAL_steps: 3000 #15000 #30000
SEED: 12345
VIZ_INTERVAL: 500

CANO_POSE_TYPE: da_pose #da_pose #t_pose #da_pose
VOXEL_DEFORMER_RES: 64 #64 #128 #64 #128 #64

W_CORRECTION_FLAG: True
W_REST_DIM: 4 #0 #16
W_REST_MODE: pose-mlp #delta-list #pose-mlp

W_MEMORY_TYPE: voxel

MAX_SCALE: 1.0 #0.03
MIN_SCALE: 0.0 #0.0003 #0.003 #3
MAX_SPH_ORDER: 1
INCREASE_SPH_STEP: [2000]

INIT_MODE: on_mesh #near_mesh #near_mesh
OPACITY_INIT_VALUE: 0.99

ONMESH_INIT_SUBDIVIDE_NUM: 0
ONMESH_INIT_SCALE_FACTOR: 1.0
ONMESH_INIT_THICKNESS_FACTOR: 0.5

NEARMESH_INIT_NUM: 10000
NEARMESH_INIT_STD: 0.1
SCALE_INIT_VALUE: 0.01 # only used for random init

###########################

LR_P: 0.00016
LR_P_FINAL: 0.0000016
LR_Q: 0.005
LR_S: 0.005
LR_O: 0.05
LR_SPH: 0.005

W_START_STEP: 1500 #1000 #500 #2000 #300 #2000
LR_W: 0.0001 # 1 # 0.00001
LR_W_FINAL: 0.00001

LR_W_REST: 0.00003
LR_W_REST_FINAL: 0.000003
# LR_W_REST_BONES: 0.0003 # for mlp
LR_W_REST_BONES: 0.00003 # for mlp

LR_F_LOCAL: 0.0

###########################

POSE_OPTIM_MODE: adam
POSE_R_BASE_LR: 0.0001
POSE_R_BASE_LR_FINAL: 0.0001
POSE_R_REST_LR: 0.0001
POSE_R_REST_LR_FINAL: 0.0001
POSE_T_LR: 0.0001
POSE_T_LR_FINAL: 0.0001

POSE_OPTIMIZE_START_STEP: 1500 #500 #1000

# Reg Terms
LAMBDA_MASK: 0.0 #0.01
MASK_LOSS_PAUSE_AFTER_RESET: 100

# other optim
N_POSES_PER_STEP: 1 #50 #1 #3 # increasing this does not help
RAND_BG_FLAG: True
# DEFAULT_BG: [0.0, 0.0, 0.0]
NOVEL_VIEW_PITCH: 0.0
IMAGE_ZOOM_RATIO: 1.0 #0.5
VIEW_BALANCE_FLAG: False
BOX_CROP_PAD: 20

# GS Control
# densify
MAX_GRAD: 0.0001 #0.0003 #0.0005 #0.0006 # 0.0002
PERCENT_DENSE: 0.01
DENSIFY_START: 100
DENSIFY_INTERVAL: 401 #300 #500 #1000 #300
DENSIFY_END: 2500 #10000 #15000
# prune
PRUNE_START: 200
PRUNE_INTERVAL: 401
OPACIT_PRUNE_TH: 0.05
RESET_OPACITY_STEPS: [2002]
OPACIT_RESET_VALUE: 0.05
# regaussian
REGAUSSIAN_STD: 0.02 #0.02 #0.02 #0.01
REGAUSSIAN_STEPS: [] #3000 #[] #[3502, 6502, 9502, 12502] #[3502, 5502, 7502] #[3502, 4502]

CANONICAL_SPACE_REG_K: 16 #16
LAMBDA_STD_Q: 0.03
LAMBDA_STD_S: 0.03
LAMBDA_STD_O: 0.03
LAMBDA_STD_CD: 0.03
LAMBDA_STD_CH: 0.03
LAMBDA_STD_W: 0.3
LAMBDA_STD_W_REST: 0.5

LAMBDA_SMALL_SCALE: 0.05 #5

LAMBDA_W_NORM: 0.3
LAMBDA_W_REST_NORM: 0.3

LAMBDA_LPIPS: 0.0 # 0.001
LAMBDA_SSIM: 0.1

START_END_SKIP: [0, 10000, 1] # ! use all!
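
For reference, a minimal sketch of how this profile could be consumed, assuming it is saved as neuman_profile.yaml (a hypothetical filename, not part of the repo). This is not GART's own loader; it just reads the keys with PyYAML and applies the log-linear learning-rate decay that Gaussian-Splatting-style trainers commonly use to go from LR_P to LR_P_FINAL over TOTAL_steps.

# Minimal sketch, NOT GART's actual training code; filename and helper are illustrative.
import math
import yaml

with open("neuman_profile.yaml", "r") as f:  # hypothetical file holding the profile above
    cfg = yaml.safe_load(f)

def expon_lr(step, lr_init, lr_final, max_steps):
    # Log-linear interpolation: lr_init at step 0, lr_final at max_steps.
    t = min(max(step / max_steps, 0.0), 1.0)
    return math.exp((1.0 - t) * math.log(lr_init) + t * math.log(lr_final))

# Example: position learning rate halfway through the 3000-step schedule above.
mid = cfg["TOTAL_steps"] // 2
print(expon_lr(mid, cfg["LR_P"], cfg["LR_P_FINAL"], cfg["TOTAL_steps"]))
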
serizawa-04013958 commented 2 months ago

Thank you for sharing the info! I could confirm the same results, so let me close this issue.