elliottwu / DeepHDR

This is the implementation for Deep High Dynamic Range Imaging with Large Foreground Motions (ECCV'18)
MIT License

How to transfer an LDR image to the HDR domain #13

Closed Carrotor116 closed 4 years ago

Carrotor116 commented 4 years ago

I am confused about how to transfer an LDR image to the HDR domain.

In your paper, this is done by gamma correction (consistent with Kalantari's paper): Hi = (Ii ** γ) / ti, with γ = 2.2, where ti is the exposure time of image Ii.

In your code,

def LDR2HDR(img, expo): # input/output -1~1
    # map [-1, 1] to [0, 1], apply gamma correction, divide by the relative exposure, and map back to [-1, 1]
    return (((img+1)/2.)**GAMMA / expo) *2.-1

and

in_exps_path = os.path.join(scene_dir, 'input_exp.txt')
in_exps = np.array(open(in_exps_path).read().split('\n')[:ns]).astype(np.float32)
in_exps -= in_exps.min()
...
in_HDRs[:,:,c*i:c*(i+1)] = LDR2HDR(img, 2.**in_exps[i])
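
To make my confusion concrete, here is a small standalone sketch of what I think the preprocessing does (GAMMA = 2.2 is assumed, and the {−2.0, 0.0, +2.0} biases are just an example):

```python
import numpy as np

GAMMA = 2.2  # gamma value from the paper

def LDR2HDR(img, expo):  # input/output -1~1
    return (((img + 1) / 2.)**GAMMA / expo) * 2. - 1

# Example biases, e.g. the {-2.0, 0.0, +2.0} set
in_exps = np.array([-2.0, 0.0, 2.0], dtype=np.float32)
in_exps -= in_exps.min()     # -> [0., 2., 4.], relative to the lowest exposure
expos = 2.**in_exps          # -> [1., 4., 16.], the divisors passed to LDR2HDR

img = np.linspace(-1., 1., 5)    # a toy LDR "image" in [-1, 1]
for e in expos:
    print(e, LDR2HDR(img, e))    # every output stays within [-1, 1] because e >= 1
```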

And Kalantari's dataset includes exposure biases of {−2.0, 0.0, +2.0} or {−3.0, 0.0, +3.0}.

My confusions are as follows:

1) Does in_exps -= in_exps.min() mean that ti in the formulation is the exposure time relative to the lowest-exposure image?

2) What is the exposure bias, and why is the exposure time equal to 2**exposure_bias? I searched the wiki and found a possibly related formula, EV = log2( N**2 / t ), where EV is the exposure value, N is the f-number, and t is the exposure time. It can be rearranged to t = (N**2)/(2**EV), but that still does not explain the relationship between exposure time and exposure_bias. Does exposure_bias mean exposure compensation? If so, is exposure_time = 2**exposure_compensation?

elliottwu commented 4 years ago

Yes, they are the same as "exposure compensation", and they are relative. In practice, it does not matter too much what scale the input values have (as long as they are not too extreme), since the neural network will learn the correct mapping.
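
A quick numeric sanity check of that relationship (assuming a fixed aperture, so compensation only changes the shutter time; the numbers are purely illustrative):

```python
# EV = log2(N**2 / t)  ->  t = N**2 / 2**EV  (the formula quoted in the question)
N = 8.0            # a fixed f-number (illustrative)
base_EV = 10.0     # metered exposure at bias 0 (illustrative)

t0 = N**2 / 2**base_EV
for bias in (-2.0, 0.0, 2.0):
    EV = base_EV - bias          # +1 stop of compensation lowers EV by 1 (more light)
    t = N**2 / 2**EV
    print(bias, t / t0)          # ratios: 0.25, 1.0, 4.0, i.e. t/t0 == 2**bias
```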

I used the lowest exposure as the base for the relative EVs, so that in_HDRs stays within the range (-1, 1); a negative relative EV would push the values beyond 1. But it could also make sense to keep the medium exposure at EV = 0, if we assume the medium exposures are more consistent across the dataset and we want to reduce the input variance. I do not remember whether I ran any experiments using the original EVs; I suspect it would not make a big difference.
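
To make the trade-off concrete, here is a small sketch comparing the two choices of base (reusing the LDR2HDR function quoted above; the {−2, 0, +2} biases are just an example):

```python
import numpy as np

GAMMA = 2.2

def LDR2HDR(img, expo):  # input/output -1~1
    return (((img + 1) / 2.)**GAMMA / expo) * 2. - 1

biases = np.array([-2.0, 0.0, 2.0])
img_max = 1.0    # the brightest possible LDR pixel

# Base = lowest exposure: relative EVs are >= 0, divisors >= 1,
# so the mapped values never exceed 1.
rel_low = biases - biases.min()                        # [0., 2., 4.]
print([LDR2HDR(img_max, 2.**e) for e in rel_low])      # [1.0, -0.5, -0.875]

# Base = medium exposure: the low-exposure image gets a negative relative EV,
# its divisor drops below 1, and the mapped value goes beyond 1.
rel_mid = biases - np.median(biases)                   # [-2., 0., 2.]
print([LDR2HDR(img_max, 2.**e) for e in rel_mid])      # [7.0, 1.0, -0.5]
```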

Carrotor116 commented 4 years ago

Thanks. So ti is just a scaling factor related to the exposure time of Ii; it does not need to be the exact exposure time of the image, and you use 2**exposure_bias as a reasonable estimate.

Am I understanding it correctly? :)

elliottwu commented 4 years ago

For preprocessing the training input, the values do not need to be exact as long as the ground truth is correct. Oftentimes we may not have access to the exact camera parameters of a photograph anyway. It can also be useful to allow some level of variation in the input during training, so that the trained model is more robust across various kinds of test images.
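
For example, one simple way to introduce such variation (purely an illustrative sketch, not something from this repository) would be to jitter the relative EVs slightly during training:

```python
import numpy as np

def jitter_exposures(in_exps, max_ev_shift=0.2, rng=None):
    # Hypothetical augmentation: perturb each relative EV by a small random
    # amount so the network sees slightly "wrong" exposure values at training time.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-max_ev_shift, max_ev_shift, size=in_exps.shape)
    return (in_exps + noise).astype(in_exps.dtype)
```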

However, when it comes to preparing the correct ground-truth HDR images, we do need to be careful. Luckily, Kalantari has done that carefully, so we do not need to worry about it.

Carrotor116 commented 4 years ago

Thanks for your reply :)