magic-research / magic-animate

[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
https://showlab.github.io/magicanimate/
BSD 3-Clause "New" or "Revised" License
10.5k stars · 1.08k forks

when will your densepose extractor be open-sourced #130

Closed · yyyouy closed this issue 10 months ago

yyyouy commented 11 months ago

Thanks for your excellent work. May I ask when your DensePose extractor will be open-sourced? I used the DensePose extractor from Simple Magic Animate, and my evaluation results (FID, FVD) on the TED-Talk dataset differ considerably from the results you published.

I wanted to inquire if the difference is due to the densepose extraction itself.

zcxu-eric commented 11 months ago

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

Delicious-Bitter-Melon commented 11 months ago

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.

yyyouy commented 11 months ago

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

May I ask whether the TED-Talk results in your paper were obtained by training on the TED-Talk training set and then testing on the TED-Talk test set? I am also curious about a color discrepancy: the background of the DensePose results extracted with detectron2 appears black, while the ones presented by your team appear purple.

zcxu-eric commented 11 months ago

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

May I ask whether the TED-Talk results in your paper were obtained by training on the TED-Talk training set and then testing on the TED-Talk test set? I am also curious about a color discrepancy: the background of the DensePose results extracted with detectron2 appears black, while the ones presented by your team appear purple.

Yes, it was trained on TED-Talk. detectron2 ships several visualizers; we use the semantic-segmentation one, whose background is purple.

yyyouy commented 11 months ago

Yes, it was trained on TED-Talk. detectron2 ships several visualizers; we use the semantic-segmentation one, whose background is purple.

Thank you very much. Do you have a plan to release this checkpoint?

yyyouy commented 11 months ago

Yes, it was trained on TED-Talk. detectron2 ships several visualizers; we use the semantic-segmentation one, whose background is purple.

We have been running DensePose with the following command:

python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl image_path dp_segm -v

Furthermore, we have experimented with various visualizers listed below:

"dp_contour": DensePoseResultsContourVisualizer, "dp_segm": DensePoseResultsFineSegmentationVisualizer, "dp_u": DensePoseResultsUVisualizer, "dp_v": DensePoseResultsVVisualizer, "dp_iuv_texture": DensePoseResultsVisualizerWithTexture, "dp_cse_texture": DensePoseOutputsTextureVisualizer, "dp_vertex": DensePoseOutputsVertexVisualizer, "bbox": ScoredBoundingBoxVisualizer,

However, we noticed an issue where the background appears black instead of purple. Could you possibly shed light on why this might be happening?

zcxu-eric commented 11 months ago

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.

Yes, we will release this checkpoint.

zcxu-eric commented 11 months ago

Yes, it was trained on TED-Talk. detectron2 ships several visualizers; we use the semantic-segmentation one, whose background is purple.

We have been running DensePose with the following command:

python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl image_path dp_segm -v

Furthermore, we have experimented with various visualizers listed below:

"dp_contour": DensePoseResultsContourVisualizer, "dp_segm": DensePoseResultsFineSegmentationVisualizer, "dp_u": DensePoseResultsUVisualizer, "dp_v": DensePoseResultsVVisualizer, "dp_iuv_texture": DensePoseResultsVisualizerWithTexture, "dp_cse_texture": DensePoseOutputsTextureVisualizer, "dp_vertex": DensePoseOutputsVertexVisualizer, "bbox": ScoredBoundingBoxVisualizer,

However, we noticed an issue where the background appears black instead of purple. Could you possibly shed light on why this might be happening?

Please use "dp_segm" and change the black background to a canvas filled with RGB (84, 1, 68).
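For reference, that recoloring step can be sketched as below. This is a minimal sketch, not the authors' code: it assumes the "dp_segm" output uses a pure-black background, and `purple_background` is a hypothetical helper name.

```python
import numpy as np
from PIL import Image

# Purple used for the DensePose background, per the reply above.
PURPLE = np.array([84, 1, 68], dtype=np.uint8)

def purple_background(densepose_png: str, out_png: str) -> None:
    """Repaint pure-black background pixels of a dp_segm image purple."""
    img = np.array(Image.open(densepose_png).convert("RGB"))
    mask = (img == 0).all(axis=-1)  # True where the pixel is pure black
    img[mask] = PURPLE              # paint those pixels RGB (84, 1, 68)
    Image.fromarray(img).save(out_png)
```

Note the black-pixel test is exact; if the visualizer anti-aliases segment borders, near-black pixels would need a tolerance threshold instead.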

Delicious-Bitter-Melon commented 10 months ago

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.

Yes, we will release this checkpoint.

Thanks for your reply. Do you compute FID between the 100 generated images and the corresponding 100 real images for each video and then average over all videos, or do you compute FID directly between all generated images (100 × the number of videos) and all real images (100 × the number of videos)?
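For context, both protocols evaluate the same statistic on different sample groupings. A minimal NumPy/SciPy sketch of FID between two sets of Inception features (`fid` is a hypothetical helper, not the evaluation code used in the paper):

```python
import numpy as np
from scipy import linalg

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Fréchet Inception Distance between two (N, D) feature arrays:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrtm(C1 @ C2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts
        covmean = covmean.real        # introduced by numerical error
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))
```

One practical difference between the two protocols: FID is biased at small sample sizes, so pooling all frames per side yields more stable mean/covariance estimates than averaging per-video FIDs computed from only 100 frames each.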

Worromots commented 10 months ago

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.

yes, we will release this ckpt

I am also looking forward to this checkpoint.