ykdai / Flare7K

Official Implementation of "Flare7K: A Phenomenological Nighttime Flare Removal Dataset"
https://ykdai.github.io/projects/Flare7K

Training and inference issue of Uformer #8

Closed. Jilin22 closed this issue 11 months ago.

Jilin22 commented 11 months ago

Hello, I'm trying to reproduce the training of Uformer with the source code provided in this repository. During training, the visualized validation results look like the expected, normal images. But when the trained weights are saved and loaded again for inference, the model outputs are all gray images. While debugging, I found that for any input, the final output of the model is a tensor whose elements all hover around 0.5. What could be the cause of this problem? Thank you!
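For reference, a minimal PyTorch sketch of this kind of sanity check (the model construction and the image size below are placeholders, not the repository's exact settings): feed a random image through the loaded network and look at the output statistics. A trained restoration network should react to its input; a near-constant output around 0.5 suggests the weights were never actually loaded.

import torch

def check_constant_output(model, device="cpu"):
    """Feed a random image through the model and report output statistics."""
    model.eval().to(device)
    with torch.no_grad():
        x = torch.rand(1, 3, 512, 512, device=device)  # dummy flare-corrupted input
        y = model(x)
    print(f"output mean: {y.mean().item():.4f}, std: {y.std().item():.4f}")

# Hypothetical usage -- constructor arguments are illustrative only:
# model = Uformer(img_size=512)
# model.load_state_dict(torch.load("experiments/net_g_850000.pth"))
# check_constant_output(model)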

ykdai commented 11 months ago

If the network is untrained, its final output is a tensor with a mean of 0.5. So I suspect you may be running inference with the initialized (untrained) weights. Is the output normal with our provided checkpoint?
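One way to test this hypothesis is to compare the saved checkpoint against a freshly initialized model: if the parameter tensors are (nearly) identical to a fresh init, the checkpoint most likely stores untrained weights. Below is a hedged sketch; the "params" key lookup is an assumption about how the training code wraps its state dict, and the Uformer constructor call is a placeholder.

import torch

def compare_to_fresh_init(trained_path, fresh_model):
    """Report how much a saved checkpoint differs from a fresh initialization."""
    ckpt = torch.load(trained_path, map_location="cpu")
    state = ckpt.get("params", ckpt)  # some trainers nest weights under a key (assumption)
    fresh = fresh_model.state_dict()
    diffs = [
        (state[k].float() - fresh[k].float()).abs().mean().item()
        for k in fresh
        if k in state and state[k].shape == fresh[k].shape
    ]
    print(f"matched {len(diffs)}/{len(fresh)} tensors, "
          f"mean abs difference: {sum(diffs) / max(len(diffs), 1):.6f}")

# Hypothetical usage (constructor arguments are placeholders):
# compare_to_fresh_init("experiments/net_g_850000.pth", Uformer(img_size=512))

A mean absolute difference close to zero would indicate that what was saved is effectively the initialization rather than the trained weights.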

Jilin22 commented 11 months ago

The losses converged as expected, and the validation results during training also look like normal images. But the outputs from my retrained weights are all gray images, while the provided checkpoint produces normal results.

ykdai commented 11 months ago

How many iterations have you trained for? Also, could you share your checkpoint with me? I can't tell what is wrong from the description alone.

ykdai commented 11 months ago

Thank you for sending me the checkpoint. I have checked it: there is no problem with this checkpoint, and I can produce normal flare-removed results with it. Are you using the command python test.py --input test_data/real/input --output result/test_real/other_model/ --model_path experiments/net_g_850000.pth for inference?
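A common cause of "checkpoint is fine but my own loading code yields gray outputs" is loading the state dict with the wrong nesting or with mismatched keys silently ignored. The sketch below assumes the checkpoint follows the "net_g_*.pth" naming convention and may nest weights under a "params" key (an assumption, not confirmed by the repository); the model construction line is a placeholder.

import torch

# Load the checkpoint and unwrap the nested state dict if present (assumption).
ckpt = torch.load("experiments/net_g_850000.pth", map_location="cpu")
state_dict = ckpt["params"] if isinstance(ckpt, dict) and "params" in ckpt else ckpt

# model = Uformer(...)  # construct the network exactly as in training (placeholder)
# missing, unexpected = model.load_state_dict(state_dict, strict=False)
# print("missing keys:", missing)
# print("unexpected keys:", unexpected)

Loading with strict=True instead will raise immediately when keys do not line up, which makes a silent mismatch (and the resulting untrained-looking gray outputs) easy to catch.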