lucidrains / naturalspeech2-pytorch

Implementation of Natural Speech 2, Zero-shot Speech and Singing Synthesizer, in PyTorch
MIT License

Some issues about implementing DurationPitchPredictor #14

Closed JiaYK closed 1 year ago

JiaYK commented 1 year ago

Regarding the Duration/Pitch Predictor described in the NaturalSpeech 2 paper, which now has a preliminary implementation in the code, I have three small questions:

  1. The paper states there is one attention layer for every three convolutional layers, but in the code every convolution is followed by an attention. Is this a mistake or intentional? (See the sketch after these questions for the layout the paper describes.)

  2. The paper uses Layer Normalization, while the code uses RMSNorm. Does RMSNorm work better?

  3. Neither the paper nor the code uses residual connections in the convolutional part. Could 30 layers be too deep to train without residuals, leading to vanishing gradients?
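For concreteness, here is a minimal sketch of the layout question 1 describes, with one attention layer interleaved after every three convolutions. The module and its defaults are hypothetical, not the repository's actual class:

```python
import torch
from torch import nn

# hypothetical sketch: a 1D conv stack that interleaves one attention layer
# after every `convs_per_attn` convolutions, as the paper describes
class ConvAttnStack(nn.Module):
    def __init__(self, dim, num_convs = 30, convs_per_attn = 3, kernel_size = 3, heads = 8):
        super().__init__()
        self.layers = nn.ModuleList([])
        for i in range(num_convs):
            self.layers.append(nn.Sequential(
                nn.Conv1d(dim, dim, kernel_size, padding = kernel_size // 2),
                nn.SiLU()
            ))
            # attention only after every `convs_per_attn`-th convolution
            if (i + 1) % convs_per_attn == 0:
                self.layers.append(nn.MultiheadAttention(dim, heads, batch_first = True))

    def forward(self, x):  # x: (batch, dim, seq_len)
        for layer in self.layers:
            if isinstance(layer, nn.MultiheadAttention):
                t = x.transpose(1, 2)               # attention expects (batch, seq, dim)
                attn_out, _ = layer(t, t, t)
                x = (t + attn_out).transpose(1, 2)  # residual around attention
            else:
                x = layer(x)
        return x
```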

Thank you very much for your contribution!

lucidrains commented 1 year ago

@JiaYK hey yes, i am aware of 1 and will get that done today!

for 2, RMSNorm has been used successfully in a number of large language models by now (alphacode, llama). I think we can safely start using it
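For reference, a minimal sketch of RMSNorm in its usual formulation (scale features to unit root-mean-square, no mean subtraction and no bias, unlike LayerNorm); the repository's exact code may differ:

```python
import torch
from torch import nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** 0.5
        self.gamma = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # F.normalize divides by the L2 norm over the last dim;
        # multiplying by sqrt(dim) turns that into division by the RMS
        return F.normalize(x, dim = -1) * self.scale * self.gamma
```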

for 3, i was actually not sure about that, but will be adding residuals (it will be redone as resnet blocks)

lucidrains commented 1 year ago

@JiaYK ok, 1 should be done, will probably get 3 done today too

lucidrains commented 1 year ago

@JiaYK ok, 3 should be done too! let me know what you think

JiaYK commented 1 year ago

@lucidrains Thank you very much for your reply! You work fast, releasing a new version so quickly. Regarding the third point, I see that you use ResnetBlock, and each ResnetBlock contains two convolutions with SiLU (which generally seems to work better than ReLU). Does this mean each ResnetBlock counts as two convolutional layers? I'm not sure; it may take experiments to tell.

lucidrains commented 1 year ago

@JiaYK oh true, yea, let's make that a hyperparameter
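Something like the following sketch, with the number of convolutions per block exposed as a hyperparameter; the names and defaults are illustrative, not the repository's exact code:

```python
import torch
from torch import nn

# illustrative 1D resnet block: `num_convs` convolutions with SiLU activations,
# plus a residual connection to keep gradients healthy in deep stacks
class ResnetBlock(nn.Module):
    def __init__(self, dim, kernel_size = 3, num_convs = 2):
        super().__init__()
        layers = []
        for _ in range(num_convs):
            layers += [
                nn.Conv1d(dim, dim, kernel_size, padding = kernel_size // 2),
                nn.SiLU()
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x) + x
```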

JiaYK commented 1 year ago

There is another small issue. While reproducing the NS2 paper, I used torchinfo to count parameters. Even with each module following the settings in the paper (and defaults for anything unspecified), the counts never match.

For example, in the Phoneme Encoder I replaced the FeedForward in the code with the Convolutional FeedForward from the paper. Setting both 1D convolutions to a filter size of 2048 and a kernel size of 9 gives 119M parameters. Only with one kernel size of 9 and the other of 1 does it come close, at 69M, which still does not equal the 72M reported in the paper.
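As a sanity check on that arithmetic, assuming a hidden size of 512 and filter size of 2048, and counting a Conv1d layer as in_channels × out_channels × kernel_size weights plus out_channels biases:

```python
# parameter count of one convolutional feedforward layer under the assumed
# config (hidden size 512, filter size 2048)
def conv1d_params(c_in, c_out, kernel):
    return c_in * c_out * kernel + c_out

hidden, filt = 512, 2048

both_k9 = conv1d_params(hidden, filt, 9) + conv1d_params(filt, hidden, 9)
k9_k1   = conv1d_params(hidden, filt, 9) + conv1d_params(filt, hidden, 1)

print(f"both kernels 9:  {both_k9 / 1e6:.2f}M per layer")  # ~18.88M
print(f"kernels 9 and 1: {k9_k1 / 1e6:.2f}M per layer")    # ~10.49M
```

With six such layers (an assumption about the encoder depth), the ~8.4M-per-layer difference accounts almost exactly for the 50M gap between the 119M and 69M totals above.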

Or take the Audio Codec, which I initialized as described in the paper:

```python
from audiolm_pytorch import SoundStream

SoundStream(
    rq_num_quantizers = 16,
    codebook_size = 1024,
    codebook_dim = 256,
    strides = (2, 4, 5, 5),
)
```

This gives 35M parameters; only by also modifying channels and use_local_attn can I bring it down to the 27M mentioned in the paper. How did you deal with the mismatched parameter counts? Or did you just ignore them? Is it acceptable as long as the order of magnitude is right?
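If it helps, the count can also be checked without torchinfo by summing over parameters(), using the same constructor call as above:

```python
from audiolm_pytorch import SoundStream

codec = SoundStream(
    rq_num_quantizers = 16,
    codebook_size = 1024,
    codebook_dim = 256,
    strides = (2, 4, 5, 5),
)

# total parameter count, in millions
num_params = sum(p.numel() for p in codec.parameters())
print(f"{num_params / 1e6:.1f}M")
```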

Looking forward to your reply!

lucidrains commented 1 year ago

@JiaYK yup, just the general ballpark is ok

from my point of view, the field is an approximate science rather than delicate engineering. hit on a few main ideas and it should be fine

lucidrains commented 1 year ago

@JiaYK although i guess it is still sensitive to certain decisions in the neural network. not a very apt description

let's just say... very different than traditional engineering

JiaYK commented 1 year ago

Thank you very much for your reply~ Then I won't worry about the exact parameter counts, and will prioritize getting all the functionality working!