Closed · sayakpaul closed this issue 3 years ago
The whole pipeline is as follows (assuming a 256x256 input resolution):

input RAW data (256x256x1) --[deBayer pre-processing]--> deBayered RAW data (128x128x4) --[PUNET model]--> output RGB image (256x256x3)

I guess your confusion comes from the deBayer pre-processing step: it packs the single-channel Bayer mosaic into four half-resolution channels, so the model sees 128x128x4 inputs even though the sensor data is 256x256x1. I will update the README and the challenge website for clarification.
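For intuition, the deBayer pre-processing described above can be sketched as a space-to-depth packing: each 2x2 block of the Bayer mosaic becomes one pixel with 4 channels. This is a minimal NumPy sketch, not the challenge's actual code; the function name `pack_bayer` and the channel ordering (here simply top-left, top-right, bottom-left, bottom-right of each block) are assumptions — check the challenge repository for the exact layout.

```python
import numpy as np

def pack_bayer(raw):
    """Pack a single-channel Bayer mosaic (H, W) into a half-resolution
    4-channel array (H/2, W/2, 4), one channel per position in each
    2x2 block. Channel order is illustrative, not the challenge's."""
    h, w = raw.shape
    return np.stack(
        [raw[0:h:2, 0:w:2],   # top-left pixel of each 2x2 block
         raw[0:h:2, 1:w:2],   # top-right
         raw[1:h:2, 0:w:2],   # bottom-left
         raw[1:h:2, 1:w:2]],  # bottom-right
        axis=-1,
    )

raw = np.zeros((256, 256), dtype=np.float32)  # dummy 256x256x1 RAW frame
packed = pack_bayer(raw)
print(packed.shape)  # → (128, 128, 4)
```

The model then learns the inverse upsampling, producing a full-resolution 256x256x3 RGB image from the 128x128x4 packed input.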
Now it's totally clear. Thank you!
(Original question:) The training happens with 128x128 (reduced) RAW images, while the pre-trained model's input dimensions are different. Why is there this mismatch? I understand the pre-trained model's input dimensions have been matched to what's expected in the challenge, but I feel I'm missing something here.