Pytorch implementation of HighRes-net, a neural network for multi-frame super-resolution, trained and tested on the European Space Agency’s Kelvin competition. This is a ServiceNow Research project that was started at Element AI.
So `val_score` is calculated using `shift_cPSNR`, but a few things need explanation:

- Why does `best_score` start from 100, and why is `shift_cPSNR` subtracted from that value?
- Why is `val_score` normalized by the size of the validation dataset after looping over the `srs.shape` values? What is the effect of that, given the score is being reduced from 100 to begin with?
- Why does the final score end up being negative after the calculation? What metric is this?
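For context, here is my rough understanding of the scoring logic I am asking about, as a minimal self-contained sketch (not the repo's actual code: `val_batches` is synthetic data, and the `psnr` function is a plain-PSNR stand-in for the repo's clearance-corrected, shift-tolerant `shift_cPSNR`):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "validation set": one batch of (super-resolved, ground-truth) pairs.
val_batches = [
    (rng.random((2, 8, 8)), rng.random((2, 8, 8))),
]

def psnr(sr, hr):
    # Stand-in for shift_cPSNR, for illustration only.
    mse = np.mean((sr - hr) ** 2)
    return -10 * np.log10(mse + 1e-12)

best_score = 100   # sentinel: any real val_score should come out lower than this
val_score = 0.0
n = 0
for sr_batch, hr_batch in val_batches:
    for sr, hr in zip(sr_batch, hr_batch):
        val_score -= psnr(sr, hr)  # accumulate the NEGATIVE of the score
        n += 1
val_score /= n     # mean negative PSNR over the dataset: lower is better
if val_score < best_score:
    best_score = val_score  # a real (negative) score always beats the sentinel
```

If this reading is right, `val_score` is the negated mean of a positive quantity, which would explain why it is negative, but I would like confirmation of why it is set up this way rather than maximizing the mean `shift_cPSNR` directly.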
Thanks