Open skylarkie opened 6 months ago
In our subsequent work, we also encountered this issue. You can try incorporating a small bias when calculating the magnitude and phase (near-zero spectral bins make the gradient of `sqrt` blow up, and `atan2` has an undefined gradient at the origin):
```python
mag = torch.sqrt(stft_spec.pow(2).sum(-1) + 1e-9)   # bias keeps sqrt's gradient finite at zero bins
pha = torch.atan2(stft_spec[:, :, :, 1], stft_spec[:, :, :, 0] + 1e-5)  # bias avoids atan2(0, 0)
```
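For context, a minimal sketch of where `stft_spec` might come from, assuming it is the real-valued view of a `torch.stft` output; the `n_fft`/`hop_length` values below are placeholders, not settings from this repository:

```python
import torch

wav = torch.randn(1, 16000)  # hypothetical batch of 1 s of 16 kHz audio
spec = torch.stft(wav, n_fft=400, hop_length=100,
                  window=torch.hann_window(400), return_complex=True)
stft_spec = torch.view_as_real(spec)  # (batch, freq, frames, 2): real/imag parts

# Biased magnitude and phase as above: silent bins no longer produce
# infinite sqrt gradients or undefined atan2 gradients.
mag = torch.sqrt(stft_spec.pow(2).sum(-1) + 1e-9)
pha = torch.atan2(stft_spec[..., 1], stft_spec[..., 0] + 1e-5)
```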
Thank you very much for the fantastic work and code release!
I tried your anti-wrapping loss and other phase-domain losses (e.g. `L1(clean_phase, est_phase)`) when training other networks in a naive way, simply adding them to the original losses with a scaling factor. However, this very often results in the model parameters ending up as `nan`. So I'm wondering whether you have run into a similar situation and found a clean fix. If so, could you please share some experience or tricks? Thank you!
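For what it's worth, a minimal sketch of the naive combination described above, assuming the anti-wrapping function f(x) = |x − 2π·round(x/(2π))|; `main_loss` and `lambda_pha` are hypothetical placeholder names, and the phases are assumed to come from the biased `atan2` in the reply above:

```python
import math
import torch

def anti_wrap(x):
    # f(x) = |x - 2*pi*round(x / (2*pi))| folds a phase error into [0, pi],
    # so the loss ignores harmless 2*pi wrap-around jumps.
    return torch.abs(x - 2 * math.pi * torch.round(x / (2 * math.pi)))

def combined_loss(main_loss, clean_pha, est_pha, lambda_pha=0.1):
    # Naive combination: original objective plus a scaled phase term.
    # With a raw (unbiased) atan2, backpropagating through atan2(0, 0)
    # yields nan gradients that then poison every parameter, which is
    # consistent with the failure described above.
    pha_loss = anti_wrap(clean_pha - est_pha).mean()
    return main_loss + lambda_pha * pha_loss
```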