johannakarras / DreamPose

Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion"
MIT License

Demo run issue #41

Open RunqiWang77 opened 1 year ago

RunqiWang77 commented 1 year ago

Hello, the DreamPose results shown on the project page are very exciting. However, I have noticed some issues when running it. As others have mentioned, no matter how the input image is changed, the output image is always of the same person.

As far as I know, DreamPose's training is supervised: a DensePose map and a fixed subject image are given as input to generate the frame in the corresponding pose, and that frame has a ground truth (GT). Does this mean that DreamPose is overfitting to the estimated DensePose maps and the corresponding image frames? If so, its usefulness would be limited, because it would require finetuning on every video that has already been captured and could not generalize to a new person's image. If I have already captured a fashion show video, why would I use DreamPose to generate it again?
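
To illustrate what I mean by per-subject finetuning, here is a toy sketch (not the repo's actual code; all names are hypothetical) of a pose-and-image-conditioned denoiser being overfit to a single subject's frames. If the released demo checkpoint was produced this way, it would explain why every new input image still yields the same person:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pose-and-image-conditioned denoiser.
# DreamPose's real model is a modified Stable Diffusion UNet; this is only a toy.
class ToyConditionalDenoiser(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # noisy frame (3) + DensePose map (3) + subject image (3) -> noise estimate (3)
        self.net = nn.Sequential(
            nn.Conv2d(channels * 3, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, noisy_frame, densepose, subject_image):
        return self.net(torch.cat([noisy_frame, densepose, subject_image], dim=1))


def finetune_on_subject(model, frames, denseposes, subject_image, steps=100):
    """Subject-specific finetuning: the model is fit only to one person's frames,
    so a checkpoint finetuned on subject A keeps reproducing subject A."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(steps):
        idx = torch.randint(0, frames.shape[0], (1,)).item()
        frame = frames[idx:idx + 1]
        pose = denseposes[idx:idx + 1]
        noise = torch.randn_like(frame)
        noisy = frame + noise  # simplified; real diffusion training uses a noise schedule
        pred = model(noisy, pose, subject_image)
        loss = nn.functional.mse_loss(pred, noise)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


# Toy data: 8 frames of one subject at 64x64, with matching DensePose maps.
frames = torch.rand(8, 3, 64, 64)
denseposes = torch.rand(8, 3, 64, 64)
subject_image = torch.rand(1, 3, 64, 64)

model = finetune_on_subject(ToyConditionalDenoiser(), frames, denseposes, subject_image)
```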

dazmashaly commented 1 year ago

I have the same problem. I noticed that it outputs the same image, but the dress color is different.

LaiaTarres commented 1 year ago

I have the same issue... the dress color and some of the garment shape change, but the identity is not preserved. Do the authors have any tips on this? Or is this the expected behaviour?