Closed: Yazdi9 closed this issue 8 months ago.
Thank you for your attention. It seems the batch_size is not set correctly. In my configuration, batch_size should match the number of GPUs, so if you have only one GPU, set batch_size to 1 here. If that is not the cause, feel free to discuss it with me. Here is another issue for reference: https://github.com/yuangan/EAT_code/issues/18#issuecomment-1926494544
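For a single-GPU run, a minimal sketch of that adjustment (the variable names and placement are illustrative, not EAT_code's exact code) could look like:

```python
import torch

# Illustrative only: derive batch_size from the number of visible GPUs,
# per the note above that batch_size should match the GPU count.
num_gpus = max(torch.cuda.device_count(), 1)
batch_size = num_gpus  # on a single-GPU Colab runtime this becomes 1
print(f"using batch_size={batch_size} for {num_gpus} GPU(s)")
```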
deepprompt_eam3d_all_final_313
cuda is available
/usr/local/lib/python3.10/dist-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
  0% 0/1 [00:00<?, ?it/s]
  0% 0/20 [00:00<?, ?it/s]
  0% 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/content/drive/MyDrive/EAT_code/demo.py", line 467, in <module>
    test(f'./ckpt/{name}.pth.tar', args.emo, save_dir=f'./demo/output/{name}/')
  File "/content/drive/MyDrive/EAT_code/demo.py", line 396, in test
    he_driving_emo_xi, input_st_xi = audio2kptransformer(xi, kp_canonical, emoprompt=emoprompt, deepprompt=deepprompt, side=True)  # {'yaw': yaw, 'pitch': pitch, 'roll': roll, 't': t, 'exp': exp}
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/EAT_code/modules/transformer.py", line 775, in forward
    hp = self.rotation_and_translation(x['he_driving'], bbs, bs)
  File "/content/drive/MyDrive/EAT_code/modules/transformer.py", line 763, in rotation_and_translation
    yaw = headpose_pred_to_degree(headpose['yaw'].reshape(bbs*bs, -1))
  File "/content/drive/MyDrive/EAT_code/modules/transformer.py", line 478, in headpose_pred_to_degree
    degree = torch.sum(pred*idx_tensor, axis=1)
RuntimeError: The size of tensor a (165) must match the size of tensor b (66) at non-singleton dimension 1
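For context on the numbers in the error: the head-pose head predicts a fixed number of bins per frame (the length of idx_tensor, here 66), and reshaping the predictions with a batch_size that does not match the actual number of frames folds bins from several frames into one row. A toy reproduction under assumed shapes (not EAT_code's real tensors):

```python
import torch

# Toy shapes, assumed for illustration: 5 frames, 66 head-pose bins.
frames, bins = 5, 66
pred = torch.randn(frames, bins)          # well-formed predictions: (5, 66)
idx_tensor = torch.arange(bins).float()   # bin indices: (66,)

torch.sum(pred * idx_tensor, axis=1)      # fine: broadcasts over (5, 66)

# Reshape with a batch size left over from a hypothetical 2-GPU config:
bad = pred.reshape(2, -1)                 # (2, 165), since 5*66/2 = 165
try:
    torch.sum(bad * idx_tensor, axis=1)
except RuntimeError as e:
    print(e)  # "The size of tensor a (165) must match the size of tensor b (66) ..."
```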