Andy010902 opened 1 month ago
Thank you for your kind words and for your insightful question regarding the PFC index.
The primary reason for the differences in the PFC values you observed is the varying lengths of the generated dance sequences used in testing. The original EDGE paper likely evaluated on shorter clips of around 5 seconds (150 frames). In my work, however, the test sequences were significantly longer, which can lead to different PFC values.
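To make the length sensitivity concrete, here is a toy sketch (not the exact EDGE implementation; the per-frame penalty and the max-acceleration normalizer are assumptions for illustration) of a PFC-style score. Because the normalizer is a per-sequence maximum, it tends to grow with sequence length, so the same metric computed on 5 s clips and on full-length sequences can differ:

```python
import numpy as np

def pfc_like_score(com_accel, left_vel, right_vel):
    """Toy PFC-style score, NOT the exact EDGE formula: per-frame penalty
    = |COM accel| * |left foot vel| * |right foot vel|, averaged over frames
    and normalized by the sequence's maximum COM acceleration."""
    a = np.linalg.norm(com_accel, axis=-1)                      # (N,)
    s = a * np.linalg.norm(left_vel, axis=-1) * np.linalg.norm(right_vel, axis=-1)
    return float(s.mean() / (a.max() + 1e-8))

# Random stand-ins for real motion: the longer sequence sees a larger
# max acceleration, shifting the normalizer and hence the score.
rng = np.random.default_rng(0)
short = pfc_like_score(*(rng.standard_normal((150, 3)) for _ in range(3)))
long_ = pfc_like_score(*(rng.standard_normal((1500, 3)) for _ in range(3)))
```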
I hope this explanation helps clarify the observed changes.
@Luke-Luo1 Thanks for your kind assistance. How can I compute the full_pose of the GT .pkl files in ./data/test/motions? When I run test.py on them, I get a KeyError for "full_pose", and I couldn't find a solution in the README.md. Best wishes! Andy
Hi Andy,
The issue you're encountering is likely due to slight differences in data format between the generated dance data and the dataset. The test.py script I provided is primarily designed for evaluating generated dance sequences, which share a similar format with the original data but differ in a few minor ways.
To resolve this, you may need to adjust the paths or key names in the script. Understanding the SMPL (Skinned Multi-Person Linear) body model format will also help, as it shows how the data is structured and what modifications might be necessary.
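As a minimal sketch of that kind of adjustment: the snippet below assumes the GT .pkl files use AIST++-style keys ("smpl_poses" of shape (N, 72), axis-angle per SMPL joint) while test.py expects a "full_pose" key. Both key names and shapes here are assumptions; inspect your own files with `print(data.keys())` first:

```python
import pickle
import numpy as np

def load_full_pose(path):
    """Load a motion .pkl and return poses as (N, 24, 3) axis-angle.

    Handles two assumed layouts: generated dances that already carry a
    "full_pose" key, and GT files that store flat "smpl_poses" (N, 72).
    """
    with open(path, "rb") as f:
        data = pickle.load(f)
    if "full_pose" in data:                  # generated-dance format
        return np.asarray(data["full_pose"])
    poses = np.asarray(data["smpl_poses"])   # assumed GT format: (N, 72)
    return poses.reshape(len(poses), 24, 3)  # 24 SMPL joints, axis-angle
```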
Best wishes,
Zhenye Luo
Hey, thanks for your kind explanation! Following EDGE, I found that the PFC of the GT (../test/motions_sliced) could be reproduced by downsampling (60 fps -> 30 fps), so I assumed the evaluation input is "../test/wavs_sliced", which is 5 s (150 frames) as you mentioned above. But you said you used longer sequences for evaluation, especially for POPDG; did you use "../test/wavs" as input, which has varying lengths? What confuses me most is this: the PFC of the GT can be reproduced with "../test/motions_sliced" as input, but the PFC of methods such as EDGE and POPDG is hard to reproduce with test.py using "../test/wavs_sliced" as input. So, for the evaluation of the methods (POPDG & EDGE), which is the input: "../test/wavs_sliced" or "../test/wavs"? Are the two the same or not? Please let me know if there is any deviation in my viewpoint above. Wishing you all the best! Andy
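P.S. The downsampling I mentioned is just frame skipping plus slicing into 5 s clips; a sketch, assuming each motion is an (N, 24, 3) array of per-frame poses (the shape is my assumption, not from the repo):

```python
import numpy as np

def downsample_and_slice(motion_60fps, clip_len=150):
    """Take every other frame (60 fps -> 30 fps), then cut non-overlapping
    5 s clips (150 frames at 30 fps). Trailing frames shorter than a clip
    are dropped."""
    motion_30fps = motion_60fps[::2]
    n = len(motion_30fps)
    return [motion_30fps[i:i + clip_len] for i in range(0, n - clip_len + 1, clip_len)]
```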
Hi, @Luke-Luo1 Thank you for your amazing work! I have a question about the PFC. I noticed that the PFC of EDGE is 1.53 and the GT is 1.33 in the EDGE paper, while in POPDG the PFC of EDGE is 0.92, the GT is 0.31, and POPDG is 0.80, with the AIST++ dataset unchanged. So I wonder what causes the PFC changes.
Best wishes! Andy