chen-rn opened 10 months ago
I tried running Vid2DensePose via Hugging Face on one of MagicAnimate's original demo videos (see the tweet below, left video): https://twitter.com/julesterpak/status/1731452110686892332
But the resulting video was significantly worse than what MagicAnimate provided; see the video below: https://github.com/Flode-Labs/vid2densepose/assets/36214945/f5aea417-aeca-4e48-b630-55b939336e4c
Are there any additional configurations we can set to improve the output?