Gomezzzzz opened 5 months ago
I'd second that. I've run some experiments with samplers etc., but the results are patchy at best. Plus the expressions need to be extremely over-the-top (OTT) to have any real effect.
Something to help match the expressions (and things like make-up etc.) on the original face to the swapped face would be great.
+1
I have an idea: how about doing a partial redraw (inpainting), guided by ControlNet, after the face swap?
The premise is a ControlNet model that can keep the face shape stable; by adjusting that model's weight you could balance expression richness against facial similarity.
Alternatively, I think this kind of face swap should really support expressions natively. The main reason it doesn't is probably that expressions weren't considered during training. Would it be possible to retrain an inswapper model as a community effort?
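The ControlNet-inpaint idea above could be sketched with Hugging Face diffusers roughly as follows. This is a hypothetical sketch, not anything ReActor ships: the model IDs, the file names (`swapped.png`, `face_mask.png`, `swapped_softedge.png`), the prompt, and the 0.8 conditioning weight are all illustrative assumptions.

```python
# Sketch: after the face swap, inpaint only the face region while a ControlNet
# holds the face structure steady and the prompt restores the expression.
# Assumed inputs (hypothetical): `swapped.png` (post-swap image),
# `face_mask.png` (white = face region to redraw), and a structure map of the
# swapped face (e.g. a softedge/lineart extraction) as the control image.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_softedge", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = load_image("swapped.png")             # result of the ReActor swap
mask = load_image("face_mask.png")            # face-only mask to limit the redraw
control = load_image("swapped_softedge.png")  # structure map that "stabilizes" the face

result = pipe(
    prompt="portrait photo, yawning, expressive face",
    image=image,
    mask_image=mask,
    control_image=control,
    # The balance knob from the comment above:
    # lower weight = freer expression, higher weight = closer to the swapped face.
    controlnet_conditioning_scale=0.8,
    strength=0.6,  # how strongly the masked region is redrawn
).images[0]
result.save("swapped_expressive.png")
```

In this sketch, `controlnet_conditioning_scale` is the "weight of this model" mentioned above: it trades expression richness against facial similarity without retraining anything.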
Feature description
ReActor works marvelously, but unfortunately "flattens" the emotions on the faces. For example, "looking frustrated" or "yawning" and so on come out as almost neutral facial expressions. ADetailer helps a bit, but the expressions still end up more neutral. Would some kind of "facial expression toggle" be possible in ReActor? :-) Or would it require creating special models for ReActor, one model per facial expression, which nobody will do, of course?
I apologize if I wrote obvious nonsense.