Closed OedoSoldier closed 9 months ago
Did you find it inside models.py?
Yes, I found the l_length
👌
I am not very clear about the l_length from SDP, is it a MSE loss?
SDP is flow-based and should not be trained with DD.
What about the l_length for SDP? Should this line be placed outside the else branch?
l_length = torch.sum((logw - logw_) ** 2, [1,2]) / torch.sum(x_mask)
Yes, but as the paper says, the MSE loss should be one part of the SDP loss during training. Is the flow loss somewhat a kind of MSE loss?
So, in sdp, we use a normalizing flow to send the discrete frame numbers (durations) to a Gaussian. The l_length in sdp is a negative log-likelihood that should be minimized in order to make sure that the flowed numbers are a sample of the Gaussian. During inference, we give it noise and the text, and it gives out the durations (using the reverse flow).
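To make the NLL idea above concrete, here is a toy sketch (NOT the repo's SDP, which stacks many flow layers): a single affine flow step that maps a duration `d` to a latent `z`, with the change-of-variables log-determinant. Minimizing this NLL pushes `z` toward a standard Gaussian; the function name and signature are illustrative assumptions.

```python
import math
import torch

# Toy sketch, not the repo's SDP: one affine flow step d -> z plus the
# change-of-variables NLL term under a standard Gaussian prior.
def affine_flow_nll(d, log_scale, shift):
    z = d * torch.exp(log_scale) + shift      # forward flow: d -> z
    log_det = log_scale.expand_as(z)          # log |dz/dd| of the affine map
    # NLL = -log N(z; 0, 1) - log_det = 0.5*z^2 + 0.5*log(2*pi) - log_det
    nll = 0.5 * z ** 2 + 0.5 * math.log(2 * math.pi) - log_det
    return nll.sum()
```

At inference time the map is inverted: sample `z` from a Gaussian and run the flow backward to get durations, which is the "reverse flow" mentioned above.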
Does that mean we will get no help from adding an MSE(SDP(text, noise, reverse), d) to the SDP training loss? And is the MSE mentioned in the paper just for the DP, but not for the SDP?
No, we can do that by sending Gaussian noise and the text to sdp, getting the durations output, and then taking an MSE with the real (MAS) durations. I have not added it in the repo, but it is possible to do it easily.
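A hedged sketch of that suggestion (not in the repo): run the SDP in reverse with Gaussian noise to predict log-durations, then take a masked MSE against the ground-truth (MAS) durations. The `sdp` call signature and the helper name are assumptions modeled loosely on VITS-style code.

```python
import torch

# Sketch only: `sdp(x, x_mask, reverse=True, noise_scale=...)` is an assumed
# interface that returns predicted log-durations from noise + text.
def sdp_mse_loss(sdp, x, x_mask, w_real, noise_scale=1.0):
    # reverse flow: Gaussian noise + text -> predicted log-durations
    logw_pred = sdp(x, x_mask, reverse=True, noise_scale=noise_scale)
    # target: log of the real (MAS) durations, masked to valid text positions
    logw_real = torch.log(torch.clamp(w_real, min=1e-5)) * x_mask
    return torch.sum((logw_pred - logw_real) ** 2) / torch.sum(x_mask)
```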
I think vits2 uses sdp in paper. 'z_d' is the noise for sdp.
Thanks for the help, I will give it a try when convenient. I think even when we add the MSE to the SDP-predicted logw, it will be at the second stage, when training the SDP with the DPD for the last 30K steps.
The paper has three losses for the duration discriminator: $L_{adv}(D)$, $L_{adv}(G)$, and $L_{mse}$. But the code only implemented $L_{adv}(D)$ and $L_{adv}(G)$:
https://github.com/p0p4k/vits2_pytorch/blob/1f4f3790568180f8dec4419d5cad5d0877b034bb/train_ms.py#L418
https://github.com/p0p4k/vits2_pytorch/blob/1f4f3790568180f8dec4419d5cad5d0877b034bb/train_ms.py#L448
$L_{mse}$ should be:
And add it to loss_gen_all: https://github.com/p0p4k/vits2_pytorch/blob/1f4f3790568180f8dec4419d5cad5d0877b034bb/train_ms.py#L446
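A hedged sketch of what that missing $L_{mse}$ term could look like, reusing the repo's `logw` / `logw_` / `x_mask` variable names and the masked-MSE form quoted earlier in this thread; the toy tensor shapes here are assumptions, not the repo's actual values.

```python
import torch

# Deterministic toy tensors standing in for the repo's variables (assumption:
# logw is the predicted log-duration, logw_ the MAS-derived target, x_mask
# the text mask, shaped [batch, 1, text_len] as in VITS-style code).
logw = torch.ones(2, 1, 5)
logw_ = torch.zeros(2, 1, 5)
x_mask = torch.ones(2, 1, 5)

# masked MSE per batch item, then summed into a scalar loss term
l_mse = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask)
loss_dur_mse = torch.sum(l_mse)
# then, roughly: loss_gen_all = loss_gen_all + loss_dur_mse
```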