TMElyralab / MusePose

MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation

Video test file: do you know what I did wrong? #12

Closed: CoolCuda closed this issue 1 month ago

CoolCuda commented 1 month ago

Hello,

Do you know why I get this result? Watch the video here: https://github.com/TMElyralab/MusePose/assets/150287877/76fff4bd-9759-46d8-8f7b-686a4732724f

Thank you

czk32611 commented 1 month ago

> Hello,
>
> Do you know why I get this result? Watch the video here: https://github.com/TMElyralab/MusePose/assets/150287877/76fff4bd-9759-46d8-8f7b-686a4732724f
>
> Thank you

The input video to test_stage_2.py should be a pose sequence video. It looks like you passed in a raw RGB video instead.
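For context, test_stage_2.py does not take the pose video directly on the command line; it reads a YAML config that pairs a reference image with one or more aligned pose videos. A minimal sketch of the invocation, assuming the sample config shipped with the repo (the flag name and config path are taken from the README and should be double-checked there):

```bash
# Inference: the config (e.g. ./configs/test_stage_2.yaml) lists
# reference image -> aligned pose video pairs; edit it to point at
# your own files before running.
python test_stage_2.py --config ./configs/test_stage_2.yaml
```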

CoolCuda commented 1 month ago

Thank you, czk32611 :) Do you know where I can find a pose sequence video?

czk32611 commented 1 month ago

> Thank you, czk32611 :) Do you know where I can find a pose sequence video?

You can run pose alignment as described here, which will output a pose sequence video.
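Roughly, the pose alignment step looks like the sketch below. The script name comes from the repo, but the exact flag names and the sample asset paths are written from memory and are best verified against the README:

```bash
# Pose alignment: extracts the pose sequence from your driving video and
# aligns it to the reference image. The output is an aligned pose video,
# which is the "pose sequence video" that test_stage_2.py expects.
python pose_align.py --imgfn_refer ./assets/images/ref.png --vidfn ./assets/videos/dance.mp4
```

Once that finishes, point the test case in configs/test_stage_2.yaml at your reference image and the aligned pose video it produced, then run the inference command above.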

CoolCuda commented 1 month ago

Thank you, my friend, I will try this.