Did you try to train WaveGAN using small datasets (less than 2 hours of recordings)?
Did you see any relation between the number of epochs needed to obtain good generations and the size of the dataset?
I am trying to understand this by training WaveGAN on a dataset of songbird recordings.
I was thinking:
if this is true, it might be possible to set a training limit that stops training automatically after a certain number of steps (depending on the size of the dataset and the batch size);
there may also be a lower bound on how small a dataset can be.
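If such a relation exists, the stopping rule could be something like the sketch below. This is only a rough illustration of the idea; the slice length and epoch target are assumptions on my part, not values from WaveGAN itself:

```python
# Hypothetical sketch: derive an automatic step budget from dataset size
# and batch size. slice_seconds and target_epochs are assumed values,
# not WaveGAN defaults.

def max_training_steps(dataset_seconds, batch_size,
                       slice_seconds=1.0, target_epochs=200):
    """Estimate a step budget: audio slices per epoch divided by
    batch size, times a chosen number of epochs."""
    slices_per_epoch = int(dataset_seconds / slice_seconds)
    steps_per_epoch = max(1, slices_per_epoch // batch_size)
    return steps_per_epoch * target_epochs

# e.g. a 2-hour dataset (7200 s) with batch size 64:
print(max_training_steps(7200, 64))  # 112 steps/epoch * 200 epochs = 22400
```

If the epochs-vs-dataset-size relation you observed is roughly linear, `target_epochs` could itself be made a function of `dataset_seconds`.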