Closed 8 months ago · 21-10-4
my config:

train_data:
  csv_path: ../TikTok_info.csv
  video_folder: ../TikTok_dataset/TikTok_dataset
  sample_size: 512
  sample_stride: 4
  sample_n_frames: 16
clip_model_path: openai/clip-vit-base-patch32
gradient_accumulation_steps: 128
batch_size: 1

Training on 1 V100, with:

optimizer = torch.optim.SGD(trainable_params, lr=learning_rate / gradient_accumulation_steps, momentum=0.9)

Result: [image showing the result after 20,000 steps]
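For reference, here is why dividing the learning rate by `gradient_accumulation_steps` (as in the optimizer line above) makes accumulated micro-batches of size 1 match a single large-batch update. This is a minimal framework-free sketch with a toy scalar least-squares model standing in for the real PyTorch loop; the model and numbers are illustrative assumptions, not from the repo:

```python
# Toy check: summing gradients over 128 micro-batches of size 1 and taking
# one step with lr / 128 is equivalent to one step on the full batch of 128,
# when each micro-loss is a mean squared error.
import random

random.seed(0)
data = [(x, 2.0 * x) for x in (random.uniform(-1, 1) for _ in range(128))]

def grad(w, batch):
    # d/dw of mean((w*x - y)^2) over the batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

lr, accum = 0.1, 128
w0 = 0.5

# (a) one optimizer step on the full batch of 128
w_full = w0 - lr * grad(w0, data)

# (b) 128 micro-batches of size 1, gradients summed, lr divided by accum
#     (mirrors lr = learning_rate / gradient_accumulation_steps above)
g_sum = sum(grad(w0, [pair]) for pair in data)
w_accum = w0 - (lr / accum) * g_sum

assert abs(w_full - w_accum) < 1e-9
```

Since each micro-batch gradient is already a mean over its (single) sample, summing them and dividing the learning rate by the accumulation count is the same as averaging them, so the two updates coincide.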
Could it be that my 20,000 steps at batch_size 1 are only equivalent to roughly 312 steps at a batch size of 64? Or is there some other reason?
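The arithmetic behind that question can be checked directly. Assuming the 20,000 counts micro-iterations of batch size 1 (as the question implies), the model has seen 20,000 samples, which a batch size of 64 would cover in about 312 steps; counted as actual optimizer updates (one per 128 accumulated micro-batches), it is even fewer:

```python
micro_batch = 1
accum = 128
iters = 20_000

samples_seen = iters * micro_batch       # 20,000 samples processed in total
steps_at_bs64 = samples_seen / 64        # equivalent steps at batch size 64
optimizer_updates = iters // accum       # actual parameter updates performed

print(steps_at_bs64, optimizer_updates)  # 312.5 156
```

So if "steps" means micro-iterations, the run corresponds to only ~156 parameter updates, which is very early in training and could by itself explain poor results.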
@21-10-4 have you ever resolved the issue?