Lornatang / SRGAN-PyTorch

A simple and complete implementation of the SRGAN super-resolution paper.
Apache License 2.0
411 stars · 105 forks

PSNR falls drastically during adversarial training #43

Closed paragon1234 closed 2 years ago

paragon1234 commented 2 years ago

The PSNR improves during generator training, but drops drastically during adversarial training.

Train Epoch[0045/0046](00010/00015) Loss: 0.007902.
Train Epoch[0045/0046](00015/00015) Loss: 0.006159.
Valid stage: generator Epoch[0045] avg PSNR: 19.95.

Train Epoch[0046/0046](00010/00015) Loss: 0.006377.
Train Epoch[0046/0046](00015/00015) Loss: 0.008251.
Valid stage: generator Epoch[0046] avg PSNR: 19.99.

Train stage: adversarial Epoch[0001/0010](00010/00015) D Loss: 0.139652 G Loss: 0.598175 D(HR): 0.990013 D(SR1)/D(SR2): 0.112813/0.022210.
Train stage: adversarial Epoch[0001/0010](00015/00015) D Loss: 0.002624 G Loss: 0.810450 D(HR): 0.998733 D(SR1)/D(SR2): 0.001354/0.000455.
Valid stage: adversarial Epoch[0001] avg PSNR: 9.15.

Train stage: adversarial Epoch[0002/0010](00010/00015) D Loss: 0.002039 G Loss: 0.589604 D(HR): 0.998040 D(SR1)/D(SR2): 0.000008/0.000007.
Train stage: adversarial Epoch[0002/0010](00015/00015) D Loss: 0.001770 G Loss: 0.579492 D(HR): 0.998254 D(SR1)/D(SR2): 0.000018/0.000017.
Valid stage: adversarial Epoch[0002] avg PSNR: 8.84.

Train stage: adversarial Epoch[0003/0010](00010/00015) D Loss: 0.001410 G Loss: 0.456838 D(HR): 0.999054 D(SR1)/D(SR2): 0.000449/0.000344.
Train stage: adversarial Epoch[0003/0010](00015/00015) D Loss: 0.000123 G Loss: 0.389203 D(HR): 0.999966 D(SR1)/D(SR2): 0.000089/0.000067.
Valid stage: adversarial Epoch[0003] avg PSNR: 8.22.

Train stage: adversarial Epoch[0004/0010](00010/00015) D Loss: 0.023198 G Loss: 0.501722 D(HR): 0.999708 D(SR1)/D(SR2): 0.016052/0.000103.
Train stage: adversarial Epoch[0004/0010](00015/00015) D Loss: 0.006275 G Loss: 0.574956 D(HR): 0.993783 D(SR1)/D(SR2): 0.000000/0.000000.
Valid stage: adversarial Epoch[0004] avg PSNR: 8.21.
Lornatang commented 2 years ago

Try to pull the latest code.

paragon1234 commented 2 years ago

Can you please elaborate on what the problem was? From the code changes I can see that a pixel loss and gradient clipping were added. Also, can you explain how to figure out that gradient clipping is required?
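(For context, a pixel-loss term added to the generator objective usually looks something like the sketch below. The weights and criteria here are illustrative assumptions, not the repository's actual values.)

```python
import torch
import torch.nn as nn

# Hypothetical loss weights; the values used in the repository may differ.
PIXEL_WEIGHT = 1.0
ADVERSARIAL_WEIGHT = 0.001

pixel_criterion = nn.MSELoss()        # pixel loss between SR and HR images
adversarial_criterion = nn.BCELoss()  # adversarial loss on discriminator output

def generator_loss(sr, hr, d_sr):
    """Combine a pixel term with the adversarial term.

    sr:   super-resolved batch from the generator
    hr:   ground-truth high-resolution batch
    d_sr: discriminator probabilities for the SR batch (in (0, 1))
    """
    pixel_loss = pixel_criterion(sr, hr)
    # The generator wants D(SR) -> 1, i.e. the discriminator to be fooled.
    adversarial_loss = adversarial_criterion(d_sr, torch.ones_like(d_sr))
    return PIXEL_WEIGHT * pixel_loss + ADVERSARIAL_WEIGHT * adversarial_loss

# Tiny smoke test with random tensors.
sr = torch.rand(2, 3, 8, 8)
hr = torch.rand(2, 3, 8, 8)
d_sr = torch.rand(2, 1).clamp(1e-4, 1 - 1e-4)
loss = generator_loss(sr, hr, d_sr)
```

The pixel term anchors the generator to the ground truth, so the adversarial term cannot drag PSNR down as freely as a pure adversarial loss can.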

Lornatang commented 2 years ago

At that time, I had set the batch size (BS) very large for fast training. The loss was dropping unstably during training, so I added gradient clipping; it has since been removed.
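(Gradient clipping in PyTorch is a one-line addition between `backward()` and `step()`. The sketch below uses a toy model and an illustrative `max_norm=1.0`, not the repository's actual setting. A common sign that clipping is needed is the kind of instability described above: loss values that spike or oscillate from step to step, which corresponds to occasional very large gradient norms.)

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and its training step.
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

x = torch.randn(4, 10)
y = torch.randn(4, 1)

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
# Clip the global gradient norm before the optimizer step.
# Returns the total norm BEFORE clipping, handy for logging/diagnosis:
# if it frequently exceeds max_norm, clipping is doing real work.
total_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```

Logging `total_norm` over training is a cheap way to decide whether clipping is required at all: if it stays well below the threshold, the clip is a no-op and can be removed, as was done here.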

paragon1234 commented 2 years ago

Thanks