XingliangJin / MCM-LDM

[CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model

RuntimeError: expected scalar type Float but found Double #5

Closed: mengW6 closed this issue 1 month ago

mengW6 commented 1 month ago

Hello, I encountered this error while running the demo. Do you know how to solve it? The command I ran is as follows: python demo_transfer.py --cfg ./configs/config_mld_humanml3d.yaml --cfg_assets ./configs/assets.yaml --style_motion_dir demo/style_motion --content_motion_dir demo/content_motion --scale 2.5

(Image 1: screenshot of the error)
XingliangJin commented 1 month ago

It looks like a data type error; I can't reproduce it on my side. You can try adding trans_cond = trans_cond.to(torch.float32) before the line that raises the error.
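For anyone hitting the same thing, here is a minimal standalone sketch of the dtype mismatch and the suggested cast; the array shape and the use of np.zeros are placeholders, not the repo's actual data:

```python
import numpy as np
import torch

# Motion data loaded from .npy files is float64 by default, and torch keeps it
# as Double, while the model weights are float32 (Float); hence the error.
trans_cond = torch.from_numpy(np.zeros((1, 196, 263)))  # dummy stand-in for the real condition
print(trans_cond.dtype)  # torch.float64

# The suggested fix: cast the conditioning tensor to float32 before it is
# passed to the float32 model.
trans_cond = trans_cond.to(torch.float32)
print(trans_cond.dtype)  # torch.float32
```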

mengW6 commented 1 month ago

Now there is another problem, as shown in the picture below. Can you give me some guidance? Also, does matplotlib have to be version 3.1.3? I am using version 3.8.4.

(Image 2: screenshot of the new error)
XingliangJin commented 1 month ago

Yes, this is because your matplotlib version is too new. 3.1.3 works, and you can also try other older versions.
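As a quick sanity check, you can print the installed version before and after downgrading; the 3.1.3 pin comes from the comment above, and pip is just one way to install it:

```python
# Print the active matplotlib version; the demo was written against 3.1.3,
# while newer versions (e.g. the reported 3.8.4) trigger the error shown above.
import matplotlib
print(matplotlib.__version__)
# One way to downgrade: pip install matplotlib==3.1.3
```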

mengW6 commented 1 month ago

1. May I ask where the content_motion and style_motion in the demo come from and how they were obtained?
2. I want to use your model and method to generate trajectory- and style-controlled dances from music. Can the Multi-condition Motion Latent Diffusion Model adapt to such inputs (see the figure below)? Do you think this is achievable? Any advice would be greatly appreciated.

(Image: the figure referenced in question 2)
XingliangJin commented 1 month ago
  1. The input motion is 263-dim format data from the HumanML3D dataset (in the new_joint_vecs folder). If you have your own custom SMPL-based motion, you can use the processing code in HumanML3D to convert it into the 263-dim format (see the sketch after this list).
  2. Our trajectory condition is used to make the transferred motion's trajectory fit the original content motion while avoiding foot-sliding issues, so it must be extracted from the content motion. We have also tried using other trajectories to make the generated motion follow different paths, but the results were not good.
  3. It is a good idea to transfer style onto dance sequences. In fact, there are many dance sequences in HumanML3D, so I suggest you try it.
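As a reference for question 1, here is a minimal sketch of loading one of those 263-dim motion files as content or style input; the file path and id are hypothetical, and the per-frame layout in the comment is the standard HumanML3D feature split for the 22-joint skeleton:

```python
import numpy as np

# Load a preprocessed HumanML3D motion from the new_joint_vecs folder.
# Each frame is a 263-dim vector: root rotation velocity (1), root linear
# velocity on the ground plane (2), root height (1), followed by local joint
# positions, rotations, velocities, and foot-contact labels for 22 joints.
motion = np.load("HumanML3D/new_joint_vecs/000001.npy")  # hypothetical path
print(motion.shape)  # (num_frames, 263)
```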