HumanAIGC / AnimateAnyone

Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation
Apache License 2.0

3 major red flags that make me suspicious this is fake #10

Open · FurkanGozukara opened this issue 8 months ago

FurkanGozukara commented 8 months ago

1. All women and attractive.
2. Such good consistency; we don't have anything close even for video-to-animation.
3. And of course, the code hasn't even been released.

By the way, "fake" means it doesn't work as it is advertised.

ReEnMikki commented 8 months ago

What goal would they achieve by doing this? 🤔

ghost commented 8 months ago

Point 1 is standard advertising fare, and with point 2 the consistency isn't perfect and has some of the qualities of current Stable Diffusion models. This model and its code+results are most definitely real. Also, people's clearly visible impatience and insatiable hunger to make porn with this is so absurdly hilarious to me that I am enjoying every day it doesn't come out. Everybody needs to chill out.

In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models as well as a trainer.

ggenny commented 8 months ago

The article doesn't seem fake to me, and it doesn't seem like they have invented anything new; in the end, they made a variant of ControlNet by adding the missing temporal information.
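(For illustration only, here is a minimal sketch of the general recipe ggenny describes, assuming a PyTorch-style setup: a stand-in spatial layer conditioned ControlNet-style through a zero-initialized projection, plus an added temporal attention layer so frames can attend to each other. Every module name and shape below is hypothetical and not taken from the paper.)

```python
# Hypothetical sketch of "ControlNet variant + temporal layer", not the paper's code.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Self-attention across the frame axis, applied independently at each spatial location."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        normed = self.norm(tokens)
        out, _ = self.attn(normed, normed, normed)
        out = out.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)
        return x + out  # residual add, so the pretrained image behavior is preserved at init


class ControlledSpatioTemporalBlock(nn.Module):
    """Stand-in spatial layer + ControlNet-style zero-conv conditioning + temporal attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)  # stand-in for a pretrained SD block
        self.control_proj = nn.Conv2d(channels, channels, 1)        # zero-initialized, as in ControlNet
        nn.init.zeros_(self.control_proj.weight)
        nn.init.zeros_(self.control_proj.bias)
        self.temporal = TemporalAttention(channels)

    def forward(self, x: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        # x, control: (batch, frames, channels, height, width); control carries e.g. pose features
        b, f, c, h, w = x.shape
        control_feat = self.control_proj(control.flatten(0, 1)).view(b, f, c, h, w)
        spatial_out = self.spatial((x + control_feat).flatten(0, 1)).view(b, f, c, h, w)
        return self.temporal(spatial_out)


# usage: ControlledSpatioTemporalBlock(64)(torch.randn(1, 8, 64, 32, 32), torch.randn(1, 8, 64, 32, 32))
```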

FurkanGozukara commented 8 months ago

> Point 1 is standard advertising fare, and with point 2 the consistency isn't perfect and has some of the qualities of current Stable Diffusion models. This model and its code+results are most definitely real. Also, people's clearly visible impatience and insatiable hunger to make porn with this is so absurdly hilarious to me that I am enjoying every day it doesn't come out. Everybody needs to chill out.
>
> In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models as well as a trainer.

Thanks, that looks like Stable Video, a slightly worse version.

However, these claimed AnimateAnyone videos are on another level. I bet they will never release the code or weights.

nickknyc commented 8 months ago

> Point 1 is standard advertising fare, and with point 2 the consistency isn't perfect and has some of the qualities of current Stable Diffusion models. This model and its code+results are most definitely real. Also, people's clearly visible impatience and insatiable hunger to make porn with this is so absurdly hilarious to me that I am enjoying every day it doesn't come out. Everybody needs to chill out.
>
> In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models as well as a trainer.

Too right... thanks for the MotionDirector tip

ShawnFumo commented 8 months ago

> Point 1 is standard advertising fare, and with point 2 the consistency isn't perfect and has some of the qualities of current Stable Diffusion models. This model and its code+results are most definitely real. Also, people's clearly visible impatience and insatiable hunger to make porn with this is so absurdly hilarious to me that I am enjoying every day it doesn't come out. Everybody needs to chill out. In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models as well as a trainer.

> Thanks, that looks like Stable Video, a slightly worse version.
>
> However, these claimed AnimateAnyone videos are on another level. I bet they will never release the code or weights.

I mean, while it is good enough to be shocking at first glance, there are still plenty of problems with it. Like that animation of the lady with the necklace: you can see how the necklace is stuck to her body instead of swinging freely. Even the slower animations have some artifacts if you look closely (often with hands, or weirdness with the eyes), and the faster animations have a ton of artifacts. And all the examples are centered bodies on static backgrounds. That can certainly still be useful in itself, but it isn't clear how easy it'll be to make this work with more advanced animations with movement in the background, zooming, panning, etc. I'm sure people will figure that out eventually, but at first they might need to resort to the techniques you'd usually use to composite greenscreen footage of real people onto virtual sets, in order to combine different kinds of animation into one video.

The paper is pretty in-depth and shows how it is built on top of aspects of Stable Diffusion and AnimateDiff, even initializing parts with those weights. At least one person on Twitter (who seems to know what they are doing, judging from other posts) mentioned it was detailed enough that they'll reproduce it in code themselves if it isn't released officially.

We don't know how much cherry-picking there is in the sample videos, or how well it generalizes to different characters and motions overall (like, I'm guessing it'd blow up if you tried to have someone do a handstand). But I see no reason to think it is "fake" even in the sense of way overpromising. They mention in the conclusion of the paper how there can still be artifacts, and how there can be more problems with areas of characters not visible in the original image (notice only a few of the videos show the character's back). They're also coming from a different perspective in research; for example, they point out how their approach is slower than other non-diffusion methods. So this may seem amazing to us from our SD perspective, but if you look at the two earlier techniques compared against in several of the videos, this seems like a more incremental quality increase over those, while also being slower.

Also, why would they even bother making a GitHub repo if they didn't intend to release at least the code eventually?
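(To make the weight-initialization point ShawnFumo mentions concrete, here is a minimal sketch of how parts of a new model can be seeded from an existing checkpoint, assuming the checkpoint file is a plain PyTorch state dict; the function, variable, and file names are hypothetical, not from the paper.)

```python
# Hypothetical sketch of partial weight initialization from a pretrained checkpoint.
import torch


def init_from_pretrained(model: torch.nn.Module, checkpoint_path: str) -> None:
    """Copy every pretrained tensor whose name and shape match; leave new layers randomly initialized."""
    pretrained = torch.load(checkpoint_path, map_location="cpu")  # assumed to be a plain state dict
    own_state = model.state_dict()
    matched = {
        name: tensor
        for name, tensor in pretrained.items()
        if name in own_state and own_state[name].shape == tensor.shape
    }
    own_state.update(matched)
    model.load_state_dict(own_state)
    print(f"initialized {len(matched)}/{len(own_state)} tensors from {checkpoint_path}")


# usage (hypothetical names): init_from_pretrained(video_unet, "sd_v1-5_unet.pt")
```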

JuvenileLocksmith commented 8 months ago

> Point 1 is standard advertising fare, and with point 2 the consistency isn't perfect and has some of the qualities of current Stable Diffusion models. This model and its code+results are most definitely real. Also, people's clearly visible impatience and insatiable hunger to make porn with this is so absurdly hilarious to me that I am enjoying every day it doesn't come out. Everybody needs to chill out. In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models as well as a trainer.

> Thanks, that looks like Stable Video, a slightly worse version.
>
> However, these claimed AnimateAnyone videos are on another level. I bet they will never release the code or weights.


Why would it behave any other way, assuming it has been implemented in the apparent manner? Seems plausible that it would work, no?

FurkanGozukara commented 8 months ago

OK, let me add something here.

Recently I made an auto installer for another repo just like this one: https://github.com/magic-research

And guess what about the demo results, haha :)

I bet this will be the same.

Whoever wants to test, I made an auto installer for you: https://github.com/magic-research/magic-animate/issues/44