guoqincode / Open-AnimateAnyone

Unofficial Implementation of Animate Anyone

about the result of the first stage #47

Closed. 21-10-4 closed this issue 8 months ago.

21-10-4 commented 8 months ago

my config:

train_data:
  csv_path: ../TikTok_info.csv
  video_folder: ../TikTok_dataset/TikTok_dataset
  sample_size: 512
  sample_stride: 4
  sample_n_frames: 16
clip_model_path: openai/clip-vit-base-patch32
gradient_accumulation_steps: 128
batch_size: 1

Training on 1 V100, with optimizer = torch.optim.SGD(trainable_params, lr=learning_rate / gradient_accumulation_steps, momentum=0.9)

Result: [image: output after 20,000 steps]
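For context, a minimal sketch (not this repo's actual training code) of how the optimizer and accumulation setting above would typically be wired into a PyTorch loop; the model, data, loss, and learning rate here are dummies/assumptions purely to keep the example self-contained:

```python
import torch

gradient_accumulation_steps = 128
learning_rate = 1e-5  # hypothetical value; the issue does not state the base LR

model = torch.nn.Linear(16, 16)                      # dummy stand-in for the real network
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(
    trainable_params,
    lr=learning_rate / gradient_accumulation_steps,  # LR scaled as in the config above
    momentum=0.9,
)

for step in range(20_000):                           # batch_size = 1 per micro-step
    batch = torch.randn(1, 16)                       # dummy batch
    loss = torch.nn.functional.mse_loss(model(batch), torch.zeros(1, 16))
    loss.backward()                                  # gradients accumulate across micro-steps
    if (step + 1) % gradient_accumulation_steps == 0:
        optimizer.step()                             # one real parameter update per 128 micro-steps
        optimizer.zero_grad()
```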

Could it be that my 20,000 steps here are actually only equivalent to a bit more than 300 steps at a batch size of 64? Or is there another reason?
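For what it's worth, a quick back-of-the-envelope check of that equivalence, using the numbers from the config above (the reference batch size of 64 is taken from the question, not from this repo):

```python
logged_steps = 20_000                    # micro-steps, batch_size = 1 each
grad_accum = 128                         # gradient_accumulation_steps from the config
reference_batch = 64                     # batch size used for comparison

samples_seen = logged_steps * 1                          # 20,000 samples in total
optimizer_updates = logged_steps // grad_accum           # 156 actual parameter updates
equivalent_steps_at_64 = samples_seen / reference_batch  # 312.5 steps at batch size 64

print(optimizer_updates, equivalent_steps_at_64)
```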

garychan22 commented 7 months ago

@21-10-4 did you ever resolve this issue?