haozh7109 / SEGAN-TensorFlow2

Speech Enhancement Generative Adversarial Network (SEGAN), implementation with TensorFlow 2.X
Apache License 2.0
3 stars, 0 forks

Best performance weights #1

Open robeendra opened 1 month ago

robeendra commented 1 month ago

This code saves the generator and discriminator weights at every epoch during training. Are the weights from the last epoch the best? How can we check the performance of the trained model? What should the generator and discriminator loss values be for the best performance?

haozh7109 commented 1 month ago

Hi, the model obtained from the last epoch is not necessarily the best model, since the model may be overfitting.

In my experience with GAN training, you can see whether the generator and discriminator have converged from the loss curves, but more importantly you need to evaluate the model by running inference on the test data, where you can compute various metrics and select the optimal checkpoint.
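As a minimal sketch of this checkpoint-selection idea (not code from this repo; `snr_db` and `select_best_checkpoint` are hypothetical names, and SNR stands in for whichever metric you actually use, e.g. PESQ or STOI):

```python
import numpy as np

def snr_db(clean, estimate, eps=1e-10):
    """Signal-to-noise ratio of an enhanced signal against the clean reference, in dB."""
    noise = estimate - clean
    return 10.0 * np.log10((np.sum(clean ** 2) + eps) / (np.sum(noise ** 2) + eps))

def select_best_checkpoint(scores):
    """Given {epoch: mean test-set score}, return the epoch with the highest score."""
    return max(scores, key=scores.get)

# Toy example: a clean sine wave, a noisy version, and a better "enhanced" version.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
enhanced = clean + 0.05 * rng.standard_normal(clean.shape)

print("noisy SNR (dB):   ", snr_db(clean, noisy))     # low
print("enhanced SNR (dB):", snr_db(clean, enhanced))  # higher after "enhancement"
print("best epoch:", select_best_checkpoint({1: 7.2, 5: 12.8, 10: 11.1}))
```

In practice you would run every saved epoch's generator over the full test set, average the metric, and keep the checkpoint with the best average rather than the last one.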

robeendra commented 1 month ago

Hello sir, thank you very much for your response. I am new to machine learning and still learning GANs from the basics. Now I am clear about model evaluation. I have one more query: is performance better using the LeakyReLU activation instead of PReLU? The original SEGAN paper uses PReLU in the generator.

Thank you.


haozh7109 commented 1 month ago

I haven't specifically compared PReLU and LeakyReLU in SEGAN. The parameterized ReLU may give better results, or it may introduce instability, since its negative slope is learned during training rather than fixed. You could run enough experiments to compare the two.
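The difference between the two activations can be sketched like this (a NumPy illustration, not repo code; in Keras the corresponding layers are `tf.keras.layers.LeakyReLU` and `tf.keras.layers.PReLU`):

```python
import numpy as np

def leaky_relu(x, alpha=0.3):
    # LeakyReLU: the negative slope alpha is a fixed hyperparameter.
    return np.where(x >= 0, x, alpha * x)

def prelu(x, alpha):
    # PReLU: same formula, but alpha is a trainable parameter
    # (typically one per channel) updated by backpropagation.
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0])
print(leaky_relu(x))         # negative inputs scaled by the fixed slope 0.3
print(prelu(x, alpha=0.25))  # negative inputs scaled by a learned slope
```

Because PReLU's slope is learned, it adds a few parameters and can adapt per channel, which is why it may help or, as noted above, occasionally destabilize training.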

robeendra commented 1 month ago

Thank you for your suggestion. Can you suggest an evaluation tool? The ITU considers PESQ outdated, and POLQA is licensed. Is there any other option?
