v-iashin / SpecVQGAN

Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021)
https://v-iashin.github.io/SpecVQGAN
MIT License

bending the re/de-constructed melspectrogram to create new sounds. #4

Closed johndpope closed 2 years ago

johndpope commented 2 years ago

https://github.com/ciaua/unagan/issues/8

[image: frequency × time activation matrix extracted from a vocal track]

Is it possible? I want to take the above visual and mash it around (change the shapes) to create new vocals.

UPDATE: basically, I think I want to condition SpecVQGAN on these images (not on a video frame per se).

v-iashin commented 2 years ago

Hi there!

Let me try to rephrase the question to make sure I am on the same page. The visual is not a spectrogram, is it? It is an activation matrix (Frequency x time). You would like to condition audio synthesis on this information and, hopefully, get the audio with a similar activation matrix.

Do I understand it correctly?

johndpope commented 2 years ago

The visual is not a spectrogram, is it? - that's right. It is an activation matrix (frequency × time). I spat this out from a song using @marl's library: https://github.com/marl/crepe. The voice is an amazing instrument, but it only has four directions: straight, up, down, or a zigzag pattern. I want to play with this (using, say, Photoshop) to create new melodies (I don't need understandable lyrics).

Do I understand it correctly? - yes.
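For reference, a minimal sketch of how CREPE produces such an activation matrix (the file name is a placeholder and the audio is assumed to be mono):

```python
import crepe
from scipy.io import wavfile

# Load a mono recording (placeholder path; CREPE handles resampling internally).
sr, audio = wavfile.read('vocals.wav')

# time: frame timestamps (s), frequency: decoded pitch track (Hz),
# confidence: voicing confidence, activation: (num_frames, 360) matrix over
# 20-cent pitch bins -- the "visual" above is essentially this matrix transposed.
time, frequency, confidence, activation = crepe.predict(audio, sr, viterbi=True)

print(activation.shape)  # (num_frames, 360)
```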

v-iashin commented 2 years ago

Ok, I see. This seems to be quite interesting.

I think I saw something similar before: https://magenta.tensorflow.org/music-vae – it is more like a MIDI player. Maybe this will also be useful: https://sonycslparis.github.io/interactive-spectrogram-inpainting/

Regarding the Spectrogram VQGAN: I don't think this image (the activation matrix) is a good choice as an input here, because you would need to quantize (encode) it into a sequence of codes that the transformer uses as a prime, and that would require training another VQGAN just to reconstruct these activation matrices.

What you can do instead is assume that for each time step there is only one frequency. Check out the visual: most of the time there is one activation per time step. With this, you can simply take the sequence of frequencies and train the transformer to generate audio given this list. Maybe you can also add a class (style: male/female) to this condition to stylize the output.
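A minimal sketch of collapsing the activation matrix into that one-frequency-per-step sequence (this assumes the matrix is a NumPy array of shape frequency × time; the function name and the confidence threshold are illustrative, not part of this repo):

```python
import numpy as np

def activations_to_frequency_sequence(activation, freq_axis_hz, silence_threshold=0.3):
    """Collapse a (freq_bins, num_frames) activation matrix into one frequency per frame.

    activation: np.ndarray of shape (freq_bins, num_frames)
    freq_axis_hz: np.ndarray of shape (freq_bins,) with the centre frequency of each bin
    Returns an array of shape (num_frames,) with the dominant frequency in Hz,
    and 0.0 where no bin is confidently active (silence/unvoiced frames).
    """
    best_bin = activation.argmax(axis=0)          # strongest bin per frame
    best_val = activation.max(axis=0)             # its activation value
    freqs = freq_axis_hz[best_bin]
    freqs[best_val < silence_threshold] = 0.0     # mark weak frames as unvoiced
    return freqs
```

That list of frequencies (plus, optionally, a style class) would be the conditioning the transformer is trained on.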

For this idea, you will need:

  1. a dataset with speech segments and pitch/frequency annotations;
  2. to train the SpecVQGAN to reconstruct the speech spectrograms;
  3. to train the transformer using the encoded frequency annotations (as a list of frequencies) as the conditioning (see the sketch after this list);
  4. to build a player that does what you want: a Photoshop-like tool that transforms the user's input into the list of frequencies, which is then plugged in as the input to the transformer.
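For step 3, the conditioning could look something like the sketch below: quantize each frequency into a discrete pitch token and prepend a style token, so the transformer primes on a plain token list. Everything here (vocabulary size, frequency range, token ids) is an illustrative assumption, not an existing interface in this repo:

```python
import numpy as np

# Illustrative vocabulary: one "unvoiced" token + N pitch bins + style tokens.
N_PITCH_BINS = 128
F_MIN, F_MAX = 60.0, 1200.0          # rough vocal range in Hz (assumption)
STYLE_TOKENS = {'male': N_PITCH_BINS + 1, 'female': N_PITCH_BINS + 2}

def frequencies_to_tokens(freqs_hz, style='female'):
    """Map a per-frame frequency track (0.0 = unvoiced) to a conditioning token list."""
    tokens = [STYLE_TOKENS[style]]
    log_min, log_max = np.log(F_MIN), np.log(F_MAX)
    for f in freqs_hz:
        if f <= 0.0:
            tokens.append(0)  # unvoiced token
        else:
            # log-spaced pitch bins, clipped to the assumed vocal range
            pos = (np.log(np.clip(f, F_MIN, F_MAX)) - log_min) / (log_max - log_min)
            tokens.append(1 + int(pos * (N_PITCH_BINS - 1)))
    return tokens
```

The same mapping is what the Photoshop-like tool in step 4 would apply to the user's drawn curve before handing it to the transformer.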