pkhungurn / talking-head-anime-2-demo

Demo programs for the Talking Head Anime from a Single Image 2: More Expressive project.
http://pkhungurn.github.io/talking-head-anime-2/
MIT License
1.13k stars 155 forks

I have a question. #35

Open hw1204 opened 2 years ago

hw1204 commented 2 years ago

Hello, I am a Korean university student interested in your project. I'm analyzing the code because the project is so impressive. I want to make sure I've understood it correctly, so I'm leaving a message.

I'm trying to create various facial expressions, but the output doesn't change, so I'm asking. If I write the code like this, is the flow right?

# happy
def make_happy(self):
    selected_morph_index = 1      # eye_happy_wink
    param_group = self.param_groups[selected_morph_index]   

    param_range = param_group.get_range()
    pose = [0.0 for i in range(poser.get_num_parameters())]

    pose[14] = param_range[0] + (param_range[1] - param_range[0]) * self.alpha
    pose[15] = param_range[0] + (param_range[1] - param_range[0]) * self.alpha

    self.save_img('happy')

Thank you.

dragonmeteor commented 2 years ago

I don't quite understand what you are trying to do.

However, you might want to take a look at the Poser interface (https://github.com/pkhungurn/talking-head-anime-2-demo/blob/main/tha2/poser/poser.py#L129), which encapsulates the neural networks so that they can be used to pose a character. The important method is the "pose" method, which takes an image, a pose, and an optional output index. The method should then output an image, which is the input image with the right modification.

You should identify in the code base where the pose method is called. Then, try to make sure that the right information gets to this method. That's basically the general advice that I can give.

Your sample code does not use an instance of the Poser class, so I cannot see how it can generate an image.
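
To make the missing step concrete, here is a minimal sketch of the flow dragonmeteor describes: build the pose vector, then actually hand it to a Poser's pose method. The method names get_num_parameters and pose mirror the repo's Poser interface, but StubPoser, make_expression, the morph indices 14/15, and the (0.0, 1.0) parameter range are illustrative assumptions, not the repo's real values; in the actual code base you would obtain a real poser and pass it image tensors.

```python
class StubPoser:
    """Stand-in with the same method shape as the repo's Poser interface.
    A real poser runs neural networks; this stub just echoes its inputs
    so the flow can be demonstrated end to end."""

    def get_num_parameters(self):
        return 42  # assumed size, for illustration only

    def pose(self, image, pose, output_index=0):
        return {"image": image, "pose": list(pose)}


def make_expression(poser, image, morph_indices, alpha, param_range=(0.0, 1.0)):
    """Build a pose vector and, crucially, call poser.pose() with it."""
    lo, hi = param_range
    pose = [0.0] * poser.get_num_parameters()
    for i in morph_indices:
        pose[i] = lo + (hi - lo) * alpha  # interpolate within the range
    # This call is the step missing from the original snippet:
    return poser.pose(image, pose)


poser = StubPoser()
result = make_expression(poser, image="input.png",
                         morph_indices=[14, 15], alpha=1.0)
print(result["pose"][14], result["pose"][15])  # 1.0 1.0
```

With a real poser, the returned image (not the input) is what you would save, so a save_img-style helper should receive the output of pose rather than being called on its own.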