durgob closed this issue 5 years ago
@gabrieleilertsen Thanks for your reply, but I am still a bit confused.
Assume clip is set to 20%. In the normalized case, say pixel values in [0, 0.8] account for 80% of the pixels; then sc = 0.8, and the scaled host_roi lies in the range [0, 1.25], if I am not mistaken.
In the non-normalized case, say the pixel values lie between 0 and 100, and values in [0, 0.8] still account for 80% of the pixels; then the scaled host_roi lies in the range [0, 125].
So there is a big difference in the maximum value (1.25 vs. 125); will that impede the training process?
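To make the two cases above concrete, here is a small NumPy sketch of my own (the function name `exposure_scale` and the exact percentile convention are assumptions, not the actual virtualcamera code). It reproduces both numbers: the scale factor sc is the (1 - clip) percentile of the image, so an image with the same 80th-percentile value but a much larger maximum ends up with a much larger scaled maximum.

```python
import numpy as np

def exposure_scale(img, clip=0.20):
    """Illustrative sketch, not the real virtualcamera implementation:
    take sc as the (1 - clip) percentile, so after dividing by sc the
    top `clip` fraction of pixel values lies above 1 (saturated)."""
    sc = np.percentile(img, 100.0 * (1.0 - clip))
    return img / sc

# Normalized case: 80% of values in [0, 0.8], max 1 -> scaled max ~1.25.
norm = np.concatenate([np.linspace(0.0, 0.8, 800),
                       np.linspace(0.8, 1.0, 200)])

# Unnormalized case: same 80th percentile (0.8) but max 100 -> scaled max ~125.
raw = np.concatenate([np.linspace(0.0, 0.8, 800),
                      np.linspace(0.8, 100.0, 200)])

print(exposure_scale(norm).max())  # ~1.25
print(exposure_scale(raw).max())   # ~125
```

In both cases exactly 20% of the pixels end up saturated; only the maximum differs, which is the point of the question.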
Yes, that is correct. Both images will have the same number of saturated pixels but different maximum values. The maximum value can vary quite a lot between images, but this is part of the LDR-to-HDR problem. Also, using a loss function in the log domain makes it easier to handle extremely large values, so that they do not have too much influence.
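A rough sketch of why the log domain helps (my own illustration; the actual hdrcnn loss differs in detail and operates on network activations). A pixel at 125 vs. 1.25 contributes an enormous squared error in the linear domain, but only a modest one after taking logs:

```python
import numpy as np

def l2(a, b):
    # Plain linear-domain mean squared error.
    return np.mean((a - b) ** 2)

def log_l2(a, b, eps=1.0 / 255.0):
    # Log-domain MSE: a pixel at 125 vs. 1.25 differs by log(100) ~ 4.6
    # instead of 123.75, so extremely large values dominate far less.
    # eps avoids log(0); the value 1/255 is an assumption for this sketch.
    return np.mean((np.log(a + eps) - np.log(b + eps)) ** 2)

a = np.array([1.25])
b = np.array([125.0])
print(l2(a, b))      # ~15314
print(log_l2(a, b))  # ~21
```

The same relative error produces the same log-domain penalty regardless of the absolute scale, which is why varying maxima between images matter less there.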
Got it! @gabrieleilertsen Thanks a lot!
Hi, thanks for your impressive work!
I have downloaded some training data using your provided dataset list (thanks again, that saved me a lot of time); some are .exr files and the others are .hdr files.
I noticed by accident that the maximum values of these files differ: some seem to have been normalized (values in [0, 1]) and some have not (values in the hundreds).
But there is no normalization of these files in virtualcamera, presumably because they are all in 32FC3 format.
So, should I normalize them to [0, 1] before they are processed by virtualcamera?
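For what it's worth, a quick way to convince yourself that pre-normalization should not be required: if the exposure scale is derived from a percentile of the image itself (an assumption mirroring the clipping discussed in the replies above; `pct_scale` is my own illustrative helper, not virtualcamera code), the fraction of saturated pixels is invariant to any global scaling of the input:

```python
import numpy as np

def pct_scale(img, clip=0.20):
    # Hypothetical helper mirroring percentile-based exposure scaling;
    # not the actual virtualcamera implementation.
    return img / np.percentile(img, 100.0 * (1.0 - clip))

rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float32)

a = pct_scale(img)          # "normalized" input, values in [0, 1]
b = pct_scale(img * 100.0)  # same image at an unnormalized scale

# The saturated fraction (values above 1) is the same either way.
print((a > 1.0).mean(), (b > 1.0).mean())
```

Only the post-scaling maximum differs between the two, which is exactly the situation discussed above.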