jcelaya / hdrmerge

HDR exposure merging
http://jcelaya.github.io/hdrmerge/

Preserve the exposure value across bracketed sets #171

Open Beep6581 opened 5 years ago

Beep6581 commented 5 years ago

When shooting panoramas, one always deals with a high dynamic range scene. Some elements of the scene will be dark, others bright. To handle this, one shoots a bracketed set of photos at every point of view while keeping the camera in manual mode with identical settings, so that stitching the bracketed sets should be simple afterwards. But it is not, because HDRMerge treats every bracketed set independently and somehow rebalances the exposure of each one, which makes the HDR DNGs vary greatly in brightness among each other compared to the source raws.

Here are the source raws; note that each bracketed set is identical in brightness except for natural variation of light in the scene: [screenshot_20190225_092912]

After converting them to HDR DNGs, the results are unusable: [screenshot_20190225_092736]. These HDR DNGs should be identical in brightness except for natural light variations in the scene. Eyeballing the exposure compensation in RawTherapee does not lead to a good result.

Ideally, HDRMerge would preserve the exposure value of each set. As a workaround, if HDRMerge could tell me the EV offset of each HDR DNG compared to the source image set, that would at least allow me to accurately dial in the exposure compensation in RawTherapee.
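
To make the workaround concrete: exposure compensation is just the base-2 log of the linear brightness ratio, so if HDRMerge reported (or one could measure) the gain it applied to a merged DNG, dialing it back in RawTherapee would be trivial. A minimal sketch, assuming a hypothetical `appliedGain` value; this is not anything HDRMerge exposes today:

```cpp
// Sketch only: 'appliedGain' stands for the (hypothetical) linear factor by
// which the merged DNG came out brighter than the source set it was built from.
#include <cmath>
#include <cstdio>

double evOffset(double appliedGain) {
    // EV offset is the base-2 logarithm of the linear gain.
    return std::log2(appliedGain);
}

int main() {
    // Example: a merged DNG that is 2.8x brighter than its source set would
    // need about -1.49 EV of exposure compensation in RawTherapee to match.
    std::printf("Exposure compensation to apply: %+.2f EV\n", -evOffset(2.8));
    return 0;
}
```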

Source raw files can be downloaded from here: https://filebin.net/52zk08b6e3509vag

rdzeldenrust commented 5 years ago

Thanks for mentioning this. I was having the same issue and was hoping there was a workaround; it would be good to have one.

A program like HDRgen does do this, so it should be possible.

Beep6581 commented 5 years ago

@rdzeldenrust there is a workaround if you use RawTherapee: save unclipped 32-bit floating-point images, see https://discuss.pixls.us/t/save-unclipped-images/11621

rdzeldenrust commented 5 years ago

Thanks @Beep6581, I'll give that a try.

rdzeldenrust commented 5 years ago

@Beep6581 trying it at the moment. Is the idea that I take the original RAW images, save them as unclipped 32-bit floating point images, and then use them with HDRMerge? I don't seem to be able to export them as anything other than tiff or jpeg or png, and HDRMerge doesn't take those formats.

Or is the idea that I process my images in HDRMerge and then take the DNGs and unclip them somehow?

Beep6581 commented 5 years ago

@rdzeldenrust no, the process is:

rdzeldenrust commented 5 years ago

Thanks for the workflow! Seems to work as far as I can see.

I'm not doing panoramas; I'm doing an HDR timelapse, which also requires the exposure to be consistent across frames, but the technique seems to work here as well.

The only thing is that the TIFFs become gigantic, but if I convert them to EXRs they should be manageable.

gmaxwell commented 4 years ago

I also ran into this, but I was leaning towards thinking it was RT applying the inconsistent gains, since I couldn't find anything in the hdrmerge codebase that could be doing it. (It's hard to tell, though, because I can't find anything other than RT that will read the hdrmerge DNGs.) Thanks for the tip about 32-bit/unclipped output, I'll give that a try.

While troubleshooting I also encountered another issue: HDRmerge computes the relative exposures from the pixel differences, but in my shots the shortest two exposures are sometimes essentially all black (because those frames are pointing at shadows, while the exposure has to allow for very bright areas in other frames). The regression that finds the relative exposures then ends up fitting mostly noise, and presumably dead pixels: "v >= nv" should probably be a strict ">", because the values will be equal for colocated hot/dead pixels (though the saturation check probably protects against hot pixels), and the L2 norm is very sensitive to extreme outliers. I've found that the results can differ noticeably from shot to shot. [This, incidentally, could produce exposure inconsistencies that hugin couldn't correct, since the exposure offset depends on which source image was used.]
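
For illustration, something as simple as a median of per-pixel ratios, with a strict inequality and a saturation check, would sidestep both the colocated hot/dead pixel pairs and the outlier sensitivity. This is only a sketch of the idea, not HDRMerge's actual fitting code:

```cpp
// Sketch: estimate the relative exposure between two co-registered frames as
// the median of per-pixel ratios, which is far less sensitive to extreme
// outliers than a least-squares (L2) fit.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

double relativeExposure(const std::vector<std::uint16_t>& longExp,
                        const std::vector<std::uint16_t>& shortExp,
                        std::uint16_t satLevel) {
    std::vector<double> ratios;
    for (std::size_t i = 0; i < longExp.size() && i < shortExp.size(); ++i) {
        const std::uint16_t v = longExp[i];
        const std::uint16_t nv = shortExp[i];
        // Strict '>' so colocated hot/dead pixels (v == nv) never contribute,
        // and anything near saturation in the longer exposure is rejected.
        if (v > nv && v < satLevel && nv > 0) {
            ratios.push_back(static_cast<double>(v) / nv);
        }
    }
    if (ratios.empty()) {
        return 1.0;  // essentially-black frame: nothing usable to fit
    }
    auto mid = ratios.begin() + static_cast<std::ptrdiff_t>(ratios.size() / 2);
    std::nth_element(ratios.begin(), mid, ratios.end());
    return *mid;  // median ratio ~= relative exposure of longExp vs shortExp
}
```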

In my case I just bypassed computeResponseFunctions and used the shutter times instead, which appears accurate enough with my camera. It might be useful if hdrmerge had an explicit option to use the camera's claimed exposures, or at least regularized the fit using the camera data (and maybe used a robust regression), or had some kind of pano-batch mode where it computed the exposures over the whole collection of images, assuming that the camera's exposure is consistent from shot to shot (this appears to be true for my camera with shutter-duration-based bracketing, but I don't know how true it is in general).
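
Roughly what I mean by using the camera's claimed exposures, as a sketch; the `ExposureMeta` fields here are illustrative, not hdrmerge's actual metadata structures:

```cpp
// Sketch: derive each frame's relative exposure from its EXIF shutter time,
// aperture and ISO instead of fitting it from pixel values.
#include <cmath>

struct ExposureMeta {
    double shutterSeconds;  // e.g. 1/125 s -> 0.008
    double aperture;        // f-number, e.g. 8.0
    double iso;             // e.g. 100
};

// Linear exposure factor of frame 'a' relative to frame 'b'. With fixed
// aperture and ISO (the manual-mode panorama case) this reduces to the ratio
// of the shutter times.
double relativeExposure(const ExposureMeta& a, const ExposureMeta& b) {
    const double ea = a.shutterSeconds * a.iso / (a.aperture * a.aperture);
    const double eb = b.shutterSeconds * b.iso / (b.aperture * b.aperture);
    return ea / eb;
}

// The same offset expressed in EV stops, handy for cross-checking a fitted value.
double relativeEV(const ExposureMeta& a, const ExposureMeta& b) {
    return std::log2(relativeExposure(a, b));
}
```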

Beep6581 commented 4 years ago

@gmaxwell please open a new issue for that, along with sample images which demonstrate the problem. You could also include a patch for testing which adds a checkbox that bypasses computeResponseFunctions; the checkbox could go into the "Open raw images" frame under "Use custom crop ratio", along with a command-line switch.