Open ozhyo opened 1 year ago
I just randomly sampled the test set, so it is different from MarioNETte's.
Thanks for your reply. Is there any fairness issue in the comparison when using different test data?
I think not; you can sample more than one test set and average the results.
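The advice above (sample several test sets and average the metric) can be sketched as follows. This is a minimal illustration, not code from the repo; `pairs` and `metric_fn` are hypothetical names for your list of <source, driving> pairs and your per-pair metric.

```python
import random
import statistics

def evaluate_on_samples(pairs, metric_fn, num_sets=3, set_size=100, seed=0):
    """Average a per-pair metric over several randomly sampled test sets.

    A fixed seed keeps the sampling reproducible across runs.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(num_sets):
        sample = rng.sample(pairs, min(set_size, len(pairs)))
        scores.append(statistics.mean(metric_fn(p) for p in sample))
    return statistics.mean(scores)
```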
Thanks for your advice! That may be the best choice, considering the lack of official code, models, and test data for MarioNETte and MeshG. So can I use your test set in the folder '/data' and replicate the results reported in your paper to perform comparisons?
Sure, no problem. Thanks for your interest in our work.
Sorry to bother you; there is another question. When evaluating PRMSE, the facepose values have three components, e.g. [[-8.0554281, -2.87242696, 2.10754395]].
I checked the source code and found the following at line 725 of detector.py in the installed module 'feat': Returns: np.ndarray: (num_images, num_faces, [pitch, roll, yaw]) - Euler angles (in degrees) for each face within each image.
So I wonder: are only the rotation angles considered when calculating PRMSE, and are the translations ignored? If so, why?
Yes, we only consider the rotation angles, because translation cannot be estimated from a single image without an anchor image.
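Given that only the rotation angles are used, PRMSE reduces to an RMSE over the (pitch, roll, yaw) triples returned by the detector. A minimal sketch, assuming PRMSE is the root-mean-square error over the Euler angles of the generated and driving faces (the exact definition should be checked against the paper):

```python
import numpy as np

def prmse(pose_pred, pose_gt):
    """Pose RMSE over Euler angles (pitch, roll, yaw), in degrees.

    pose_pred, pose_gt: arrays of shape (num_images, 3), e.g. the
    facepose values from py-feat. Translation is deliberately ignored,
    since it cannot be recovered from a single image without an anchor.
    """
    pose_pred = np.asarray(pose_pred, dtype=float)
    pose_gt = np.asarray(pose_gt, dtype=float)
    return float(np.sqrt(np.mean((pose_pred - pose_gt) ** 2)))
```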
Hi, sorry to bother you, but there is another question. I wonder how to generate results using the test data in the folder './data', such as '/data/celeV_cross_id_evaluation.csv'. It seems that the code in animate.py generates videos rather than images, but the test data contains images. So how can we generate results using '/data/celeV_cross_id_evaluation.csv'?
That file contains the paths of the source image and driving image; you can write a dataset class to extract the <source, driving> pairs.
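A minimal sketch of reading such pairs from the evaluation CSV. The column names `source` and `driving` are assumptions; adjust them to whatever header the actual CSV uses, and wrap the result in your framework's dataset class as needed.

```python
import csv

def load_pairs(csv_path, source_col="source", driving_col="driving"):
    """Read <source, driving> image-path pairs from an evaluation CSV.

    source_col / driving_col are assumed column names; check them
    against the header of the actual file, e.g.
    data/celeV_cross_id_evaluation.csv.
    """
    pairs = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            pairs.append((row[source_col], row[driving_col]))
    return pairs
```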
Thanks for your reply. When testing with a source image and a driving image, only absolute motion transfer can be performed, due to the lack of a 'best frame', while relative motion transfer is what produces the videos. So should we just replace relative motion transfer with absolute motion transfer for testing?
In my paper, I did it that way. But I think it would be better if you could use relative motion transfer with the help of the 'best frame'.
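The distinction between the two transfer modes can be illustrated on keypoints. This is a toy sketch in the style of FOMM-like pipelines, not the repo's actual implementation: absolute transfer uses the driving keypoints directly, while relative transfer applies the driving-minus-best-frame offset to the source keypoints, which is why it needs an anchor ('best') frame.

```python
import numpy as np

def transfer_keypoints(kp_source, kp_driving, kp_best=None):
    """Toy sketch of absolute vs. relative motion transfer on 2D keypoints.

    Absolute (kp_best is None): use the driving keypoints directly,
    so a single driving image suffices.
    Relative: add the offset between the driving frame and the 'best'
    anchor frame to the source keypoints.
    """
    kp_driving = np.asarray(kp_driving, dtype=float)
    if kp_best is None:  # absolute transfer: no anchor frame needed
        return kp_driving
    kp_source = np.asarray(kp_source, dtype=float)
    kp_best = np.asarray(kp_best, dtype=float)
    return kp_source + (kp_driving - kp_best)
```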
Thanks a lot. I'll follow your choice, because the other methods don't use a 'best frame' either; absolute motion transfer is fair to all methods.
Could you provide the generated results using the test data in '/data/celeV_cross_id_evaluation.csv' ?
Actually, the results on celebV are quite poor, because we did not train our network on celebV; we just use the checkpoint trained on voxceleb1 to test on celebV.
Thanks for your reply. Then what about the results generated from the test data in '/data/vox_cross_id_evaluation.csv'? Could you provide those?
Sure, will upload it to onedrive later.
Thanks a lot. Are the generated results available now?
Please check this link: Vox Cross Id
Thanks! Could you please provide the original images without keypoints drawn on them?
No problem. Sorry for the delay; I have been busy with the paper rebuttal and CVPR.
Hello, thanks for releasing the code of this excellent work! I have a question about the evaluation and comparison with MarioNETte and MeshG. As mentioned in the paper, the test-set sampling strategy follows that of MarioNETte, and the reported results of MarioNETte and MeshG are taken from their original papers. So I wonder whether the test-set lists in the folder './data', such as '/data/celeV_cross_id_evaluation.csv', are the same as those used by MarioNETte and MeshG. Looking forward to your reply! Thanks!