TimoBolkart / voca

This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character mesh.
https://voca.is.tue.mpg.de/en

Question about making windows in audio_handler.py #53

Closed wuxx1624 closed 3 years ago

wuxx1624 commented 4 years ago

Hello, Thank you for this great work.

I noticed that in audio_handler.py, half of the window_size is zero-padded before the sequence and half after. Therefore, each window includes window_size/2 frames before the current frame and window_size/2 frames after it.

zero_pad = np.zeros((int(self.audio_window_size / 2), network_output.shape[1]))
network_output = np.concatenate((zero_pad, network_output, zero_pad), axis=0)

I'm trying to re-train the model without using future data, since the intended application is real-time animation. Therefore, I modified the zero-padding section as follows:

zero_pad = np.zeros((int(self.audio_window_size), network_output.shape[1]))
network_output = np.concatenate((zero_pad, network_output), axis=0)

However, the result became much worse.

May I know if there is any reason the audio window has to be padded this way? Is there any specific pre-processing of the target vertices that is related to the window?

Thanks a lot!

TimoBolkart commented 4 years ago

Given a speech sequence, we split it into overlapping windows of speech features, where each window is centered at a video frame. The output vertex offset (i.e. the animation offset for that window) is therefore reconstructed from speech information both before and after the actual frame. As a consequence, we need to pad the sequence at the beginning and end by half a window size so that the reconstruction gets a complete feature window as input. Some temporal context is important to get a smooth animation, even though the model effectively predicts results only in a frame-by-frame manner. Does this answer your question?
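To make the windowing concrete, here is a minimal, self-contained sketch of how such centered windows could be sliced from the padded sequence (the array names and sizes below are illustrative assumptions, not code taken from audio_handler.py):

import numpy as np

# Illustrative values; not the actual settings used in VOCA.
window_size = 16
num_frames = 100
num_features = 29
features = np.random.randn(num_frames, num_features)

# Pad half a window of zeros on each side so every frame gets a full window.
half = int(window_size / 2)
zero_pad = np.zeros((half, num_features))
padded = np.concatenate((zero_pad, features, zero_pad), axis=0)

# Window t covers original frames [t - half, t + half), i.e. it is centered at frame t.
windows = np.stack([padded[t:t + window_size] for t in range(num_frames)])
# windows.shape == (num_frames, window_size, num_features)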

wuxx1624 commented 4 years ago

@TimoBolkart Thank you for the reply! If I only want to use the information before the actual frame, is it possible to change the output vertex offset to satisfy this purpose?

TimoBolkart commented 4 years ago

I think you don't need to adapt the vertex offset output, only the audio input. You could change the data handling such that each window contains only the features before and up to the frame, rather than also including parts after the frame. We have never experimented with this, and I would expect the output to get jittery.
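As an untested sketch of that kind of causal data handling (again with illustrative names and sizes, not code from the repository), the padding and slicing could look like this:

import numpy as np

# Illustrative values; not the actual settings used in VOCA.
window_size = 16
num_frames = 100
num_features = 29
features = np.random.randn(num_frames, num_features)

# Pad only at the start and take windows that end at the current frame,
# so no future frames are used.
zero_pad = np.zeros((window_size - 1, num_features))
padded = np.concatenate((zero_pad, features), axis=0)

# Window t covers original frames [t - window_size + 1, t], i.e. only past and current context.
windows = np.stack([padded[t:t + window_size] for t in range(num_frames)])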

wuxx1624 commented 4 years ago

@TimoBolkart Thank you for the suggestion! I still have a few points of confusion.

  1. For the vertex offset, do you mean the "target_vertices" in the pipeline? It accepts data_verts.npy in the feed_dict.
  2. Were the vertex data in data_verts.npy reconstructed using only the actual frame, or also using information from before and after the actual frame? If the latter, can I get vertices reconstructed using only the actual frame?

TimoBolkart commented 4 years ago

VOCA outputs the offsets from a static subject-specific template. For changing the input window, you don't need to change anything with the vertex output, I think. The model only uses information from the audio window, which does contain some temporal context before and after the frame, but this information is only in the audio domain. The vertex output has no information about output vertices before or after the frame.
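As a minimal illustration of that output convention (the shapes and names below are assumptions for this example, not taken from the codebase):

import numpy as np

# Hypothetical shapes: VOCA uses the FLAME topology, assumed here to have 5023 vertices.
num_frames, num_vertices = 100, 5023
template = np.zeros((num_vertices, 3))             # static subject-specific template
offsets = np.zeros((num_frames, num_vertices, 3))  # per-frame offsets predicted by the model
animated = template[None, :, :] + offsets          # animated mesh = template + per-frame offset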