Closed phphuc612 closed 7 months ago
Thank you for your attention. The batch_size should match the number of your GPUs. For instance, with 4 GPUs, the batch_size should be 4. Because of the impact of the frame length (seqlen), training EAT with a batch_size greater than 1 on one GPU has not been implemented.
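If it helps future readers, the rule above can be written as a small sanity check. This is a hypothetical helper, not part of the EAT codebase: it only encodes the stated constraint that, under distributed training, EAT runs one sample per GPU, so the global batch_size must equal the number of GPUs.

```python
def check_batch_size(batch_size: int, world_size: int) -> None:
    """Hypothetical helper (not in EAT): one sample per GPU is assumed,
    so the global batch_size must equal the number of GPUs (world_size)."""
    if batch_size != world_size:
        raise ValueError(
            f"batch_size ({batch_size}) must equal the number of GPUs "
            f"({world_size}); a per-GPU batch_size > 1 is not implemented."
        )

# e.g. with 4 GPUs, batch_size must be 4 -> passes silently
check_batch_size(4, 4)
```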
Hi @yuangan , I am having trouble training Emotional Adaptation on 1 GPU: runtime errors occur due to mismatched dimensions. Thank you for your great work and for taking the time to help me out.
Environment diff from README:
- `device_ids 0`
Errors:
1. Mismatched shapes: `face_feature_map` is not as expected (transformer.py#L807).
2. Unexpected case: `batch_size=1` in deepprompt_eam3d_st_tanh_304_3090_all.yaml#L70 (it can train normally).
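The maintainer's point about seqlen can be illustrated with a small sketch (illustrative only, not EAT code; the shapes are made up): clips with different frame counts cannot be stacked into a single batch tensor, so a per-GPU batch larger than 1 would require padding or bucketing.

```python
import numpy as np

# Two hypothetical feature sequences with different frame lengths (seqlen).
clip_a = np.zeros((120, 64))  # 120 frames, 64-dim features
clip_b = np.zeros((95, 64))   # 95 frames

try:
    # Stacking into one (batch, seqlen, dim) array fails when seqlen differs.
    batch = np.stack([clip_a, clip_b])
except ValueError as err:
    print("cannot batch clips of different seqlen:", err)
```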