Closed wuxx1624 closed 3 years ago
Given a speech sequence, we split the sequence into overlapping windows of speech features, where each window is centered at a video frame. Therefore, the output vertex offset (i.e. the animation offset for that particular window) is reconstructed from some speech information before and after the actual frame. As a consequence, we need to pad the sequence at the beginning and end by half a window size so that the reconstruction gets a complete feature window as input. Some temporal context is actually important for getting a smooth animation, even though the model effectively predicts results only in a frame-by-frame manner. Does this answer your question?
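The windowing described above can be sketched roughly as follows (a minimal NumPy sketch of the idea; the function name, window size, and feature dimension are illustrative, not VOCA's actual code):

```python
import numpy as np

def extract_windows(features, window_size=16):
    """Split a (num_frames, feature_dim) feature sequence into
    overlapping windows, each centered on one video frame."""
    half = window_size // 2
    # Zero-pad half a window at both ends so the first and last
    # frames also get a complete feature window as input.
    padded = np.pad(features, ((half, window_size - half), (0, 0)),
                    mode='constant')
    # One window per original frame, centered at that frame.
    return np.stack([padded[i:i + window_size]
                     for i in range(len(features))])

windows = extract_windows(np.random.randn(100, 29), window_size=16)
print(windows.shape)  # (100, 16, 29)
```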
@TimoBolkart Thank you for the reply! If I only want to use the information before the actual frame, is it possible to change the output vertex offset to satisfy this purpose?
I think you don't need to adapt the vertex offset output, only the audio input. You could change the data handling so that each window only contains the features before and up to the frame, rather than also including parts after the frame. We have never experimented with this, and I would expect that the output gets jittery.
@TimoBolkart Thank you for the suggestion! I still have some confusion.
VOCA outputs the offsets from a static subject-specific template. For changing the input window, you don't need to change anything about the vertex output, I think. The model only uses information from the audio window, which does contain some temporal context before and after the frame. But this information is only in the audio domain. The vertex output has no information about output vertices before or after the frame.
Hello, Thank you for this great work.
I noticed that in audio_handler.py, half of the window_size is padded before the sequence and half is padded after. Therefore, each window includes window_size/2 frames before the current frame and window_size/2 frames after it.
I'm trying to re-train the model without using future data, with the intended application being real-time animation. Therefore, I modified the zero-padding section as follows:
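(The exact modification from this comment is not shown; a causal variant of the padding along these lines might look like the following hypothetical sketch, where each window ends at the current frame instead of being centered on it:)

```python
import numpy as np

def extract_causal_windows(features, window_size=16):
    """Each window contains only the current frame and the
    window_size - 1 frames before it (no future context)."""
    # Zero-pad only at the beginning so early frames still
    # receive a full-length window.
    padded = np.pad(features, ((window_size - 1, 0), (0, 0)),
                    mode='constant')
    # Window i ends exactly at original frame i.
    return np.stack([padded[i:i + window_size]
                     for i in range(len(features))])

windows = extract_causal_windows(np.random.randn(100, 29), window_size=16)
print(windows.shape)  # (100, 16, 29)
```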
However, the result became much worse.
May I know if there is any reason the audio window has to be padded that way? Is there any specific pre-processing of the target vertices that is related to the window?
Thanks a lot!