Closed chensming closed 10 months ago
Hello, the results reported in the paper and the released model checkpoint were generated by a model trained for 120 epochs with a batch size of 8. The results above appear to be from 50 epochs; please train longer, up to 120 epochs. I expect the dehazing and denoising performance would improve with more training. We observed that a batch size of 8 per GPU, i.e., a total batch size of 32, gave the best results.
Oh, thanks. I would also like to know how to produce this visualization.
Which of the three prompt generation modules did you use for the visualization? (There are three prompt components in the model.) https://github.com/va1shn9v/PromptIR/blob/20eff7735fd603ba64842bfd282570e70cd46386/net/model.py#L339 https://github.com/va1shn9v/PromptIR/blob/20eff7735fd603ba64842bfd282570e70cd46386/net/model.py#L352 https://github.com/va1shn9v/PromptIR/blob/20eff7735fd603ba64842bfd282570e70cd46386/net/model.py#L364
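For reference, one common way to visualize a prompt component's output (a sketch, not necessarily the authors' method) is to average the tensor over the channel dimension and min-max normalize each map to [0, 1] before saving it as a heat map. The `(B, C, H, W)` shape and the function name below are assumptions for illustration; a torch tensor would need `.detach().cpu().numpy()` first.

```python
import numpy as np

def prompt_to_heatmap(prompt):
    """Collapse a (B, C, H, W) array to per-sample (H, W) maps in [0, 1]."""
    avg = prompt.mean(axis=1)                    # (B, H, W): channel-wise mean
    lo = avg.min(axis=(1, 2), keepdims=True)     # per-sample minimum
    hi = avg.max(axis=(1, 2), keepdims=True)     # per-sample maximum
    return (avg - lo) / (hi - lo + 1e-8)         # min-max normalize per sample

# Random stand-in for a prompt component's output:
dummy = np.random.rand(2, 64, 32, 32).astype(np.float32)
maps = prompt_to_heatmap(dummy)                  # shape (2, 32, 32)
```

Each map in `maps` can then be displayed with any colormap (e.g. `matplotlib.pyplot.imshow(maps[0], cmap="jet")`).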
Hi, I am interested in your work and have reproduced the experiment. I got the following results:
The first line is the result I reproduced; the second line is the result from the paper. I found my deraining result to be much higher than in the paper, while none of the other tasks matched it. Did I set something up incorrectly? I changed the batch size from 8 to 4 because of CUDA memory limits, and I followed issue 4 to prepare the dataset.
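If GPU memory forces a smaller per-step batch, gradient accumulation can keep the effective batch size at 8. This is a generic PyTorch sketch, not code from the PromptIR repo; the model, data, and loss below are stand-ins.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)                 # stand-in for the restoration model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
accum_steps = 2                         # 2 micro-batches of 4 -> effective batch of 8

opt.zero_grad()
for step in range(4):                   # toy loop over micro-batches
    x = torch.randn(4, 8)               # micro-batch of 4 samples
    y = torch.randn(4, 8)
    # Scale the loss so summed gradients match one batch of 8:
    loss = nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()                     # gradients accumulate across calls
    if (step + 1) % accum_steps == 0:
        opt.step()                      # update once per effective batch
        opt.zero_grad()
```

Note this only approximates a true batch of 8 when the loss is a mean over samples and there are no batch-size-dependent layers in play (e.g. BatchNorm statistics still see batches of 4).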