fthaler / rawhdr

A simple HDR image merger that converts multiple RAW files into a single HDR image.
https://fthaler.github.io/rawhdr

contact request #1

Open Bra1nsen opened 2 years ago

Bra1nsen commented 2 years ago

Hey fthaler, thanks a lot for sharing your work!

I'm trying to estimate solar irradiance with ground-based sky imaging.

[image SKYCAM: multiple LDR sky images]

Clouds and sky recordings have the greatest dynamic range of all. I was thinking about merging ~15 images. There will also be heavily overexposed and heavily underexposed images - is there some kind of weight function that sorts such images out directly? Furthermore, I wanted to ask how exactly your merging works, i.e. which mathematics it is based on?

Many greetings, Paul

Bra1nsen commented 2 years ago
[screenshot: error output]

Is that normal?

fthaler commented 2 years ago
[screenshot: error output]

Is that normal?

Definitely not, see my answer on issue #3.

fthaler commented 2 years ago

The HDR merging algorithm of rawhdr is relatively simple: it tries to find parts of the image which are neither under-exposed nor clipped (currently using a simple heuristic). It then compares the average intensity of these well-exposed pixels with the corresponding average of the second image (for which it does exactly the same), which leads to an estimate of the relative exposure between the two images. Finally, it merges the two images such that it takes the well-exposed regions of both and drops over- and under-exposed regions as much as possible.
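
In rough NumPy terms, the idea is something like the following simplified sketch (not the actual rawhdr code; the thresholds are illustrative, and both images are assumed linear and normalized to [0, 1]):

import numpy as np

def relative_exposure(ref, other, lo=0.01, hi=0.99):
    # Pixels that are neither under-exposed nor clipped in both frames
    good = (ref > lo) & (ref < hi) & (other > lo) & (other < hi)
    # The ratio of mean intensities estimates the relative exposure
    return ref[good].mean() / other[good].mean()

def merge(ref, other, lo=0.01, hi=0.99):
    # Scale the second image to the exposure of the reference
    other = other * relative_exposure(ref, other, lo, hi)
    # Keep well-exposed reference pixels, fall back to the other image
    use_ref = (ref > lo) & (ref < hi)
    return np.where(use_ref, ref, other)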

The first image passed to rawhdr is taken as the reference image, that is, the exposure of the other images is scaled to match the first one.

This is a quite simple and physically accurate procedure, but note that it only works for images in a linear color space (that is, RAW or linear high-dynamic-range images). Compared to other methods, it does not rely on (often not perfectly accurate) camera EXIF information. To estimate the absolute solar irradiance, however, you will probably need to incorporate exactly this information (ISO, aperture, exposure time) for the reference image.
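
As a hedged sketch of that last step: photographic exposure is proportional to shutter time times ISO divided by the f-number squared, so a rough conversion factor for the reference frame could look like this (merged_pixels is a placeholder for the linear HDR result; real calibration would also need vignetting and spectral response):

def exposure_factor(shutter_s, iso, f_number):
    # Photographic exposure H is proportional to t * S / N^2
    return shutter_s * iso / f_number**2

# Irradiance in arbitrary linear units, e.g. for 1/1000 s, ISO 100, f/8:
# irradiance = merged_pixels / exposure_factor(1 / 1000, 100, 8.0)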

In your case, the algorithm could fail if an image has no region that is not totally overexposed, because rawhdr cannot detect the correct exposure then. But it should be relatively straightforward to filter these images out before feeding them to rawhdr, or alternatively to filter them out inside rawhdr. So if you run into such an issue, I could try to add this filtering directly to rawhdr.

Bra1nsen commented 2 years ago

Nice to hear from you. I'm still working on this project.

The dynamic range of the sky is gigantic, but with exposure-time-series fusion it becomes possible to capture it.

What did you create this project for? What was your goal?

By the way, my name is Paul.

Bra1nsen commented 2 years ago

The HDR merging algorithm of rawhdr is relatively simple: it tries to find parts of the image which are neither under-exposed nor clipped (currently using a simple heuristic). It then compares the average intensity of these well-exposed pixels with the corresponding average of the second image (for which it does exactly the same), which leads to an estimate of the relative exposure between the two images. Finally, it merges the two images such that it takes the well-exposed regions of both and drops over- and under-exposed regions as much as possible.

That sounds great!

Bra1nsen commented 2 years ago

The first image passed to rawhdr is taken as the reference image, that is, the exposure of the other images is scaled to match the first one.

Why not an average reference image, i.e. the average of all images together?

import numpy as np
import imageio.v3 as iio
from skimage import exposure
from skimage.util import img_as_ubyte

hdr = [raw_1, raw_2, raw_3, raw_4]  # list of np.uint16 RAW arrays
# Sum in float to avoid uint16 overflow
hdr = np.sum(np.stack(hdr).astype(np.float64), axis=0)

# Normalize according to exposure profile
equal_exp = img_as_ubyte(exposure.rescale_intensity(hdr, out_range=(0.0, 1.0)))
iio.imwrite('hdr.tga', equal_exp)

Bra1nsen commented 2 years ago

To estimate the absolute solar irradiance, however, you will probably need to incorporate exactly this information (ISO, aperture, exposure time) for the reference image.

Basically I need a function to determine:

N - the number of images
e_i - the optimal exposure time setting for every image taken

Goal: the optimal point between minimal time / computational effort <-> HDR <-> maximal solar range.
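
Roughly, I imagine something like this sketch (an exposure ladder in full stops; scene_stops and frame_stops are assumptions about the scene's and the sensor's dynamic range):

import math

def exposure_ladder(t_base, scene_stops, frame_stops, step_stops=1.0):
    # Number of images N needed to cover the scene's dynamic range
    # when each frame captures about frame_stops usable stops
    extra = max(0.0, scene_stops - frame_stops)
    n = math.ceil(extra / step_stops) + 1
    # Exposure times e_i, spaced step_stops apart
    return [t_base * 2 ** (i * step_stops) for i in range(n)]

# e.g. ~20 stops of sky with ~10 usable stops per frame:
# exposure_ladder(1 / 4000, 20, 10)  # -> 11 exposure times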

Bra1nsen commented 2 years ago

In your case, the algorithm could fail if an image has no region that is not totally overexposed

Could you please explain that further? What kind of filter and why?

fthaler commented 2 years ago

Nice to hear from you. I'm still working on this project.

The dynamic range of the sky is gigantic, but with exposure-time-series fusion it becomes possible to capture it.

What did you create this project for? What was your goal?

By the way, my name is Paul.

Hi Paul :)

I use it mainly for classical HDR photography and was not happy with the other available open-source HDR merging solutions. All algorithms I found/tried generated banding artifacts in smooth areas or introduced strange color shifts. There was no software available that exploits the fact that RAW images use a linear color space and are thus quite easy to merge in a physically meaningful way. So I created my own…

Cheers, Felix

fthaler commented 2 years ago

The first image passed to rawhdr is taken as the reference image, that is, the exposure of the other images is scaled to match the first one.

Why not an average reference image, i.e. the average of all images together?

import numpy as np
import imageio.v3 as iio
from skimage import exposure
from skimage.util import img_as_ubyte

hdr = [raw_1, raw_2, raw_3, raw_4]  # list of np.uint16 RAW arrays
# Sum in float to avoid uint16 overflow
hdr = np.sum(np.stack(hdr).astype(np.float64), axis=0)

# Normalize according to exposure profile
equal_exp = img_as_ubyte(exposure.rescale_intensity(hdr, out_range=(0.0, 1.0)))
iio.imwrite('hdr.tga', equal_exp)

My camera can be configured to take the normally exposed image of an exposure series first, and the under- and overexposed ones after it. Thus, when merging the exposure stack, it is convenient to use the first image's exposure as the reference. Other approaches are of course possible.

fthaler commented 2 years ago

In your case, the algorithm could fail if an image has no region that is not totally overexposed

Could you please explain that further? What kind of filter and why?

For example, if all pixels are overexposed (e.g. pure white), the algorithm has no chance to correctly estimate the image's relative exposure, and there is no value in using the image at all. So it should be filtered out before being passed to rawhdr.
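
A minimal pre-filter could simply look at the fraction of clipped (or all-dark) pixels, something like this sketch (the thresholds are just a guess; images assumed linear in [0, 1]):

import numpy as np

def usable(image, lo=0.01, hi=0.99, max_bad=0.98):
    # Reject frames that are almost entirely over- or under-exposed
    too_bright = np.mean(image >= hi)
    too_dark = np.mean(image <= lo)
    return too_bright < max_bad and too_dark < max_bad

# frames = [f for f in frames if usable(f)]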

Bra1nsen commented 1 year ago

There was no software available that exploits the fact that RAW images use a linear color space and are thus quite easy to merge in a physically meaningful way. So I created my own…

Cheers, Felix

That's exactly what I'm looking for; I'm really grateful for your work. Today I plotted the raw camera response function. It is indeed pretty linear:

[plot: raw camera response function of the IMX477]

Bra1nsen commented 1 year ago

My camera can be configured to take the normally exposed image of an exposure series first, and the under- and overexposed ones after it. Thus, when merging the exposure stack, it is convenient to use the first image's exposure as the reference. Other approaches are of course possible.

I guess I will just choose the frame with the most balanced histogram as the reference image :)
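
In code, something like this rough idea (frames assumed normalized to [0, 1]):

import numpy as np

def pick_reference(frames):
    # Choose the frame whose median intensity is closest to mid-range
    distances = [abs(np.median(f) - 0.5) for f in frames]
    return frames[int(np.argmin(distances))]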

Bra1nsen commented 1 year ago

Basically your code operates on an exposure weight function, is that correct?


fthaler commented 1 year ago

Not sure what the definition of an exposure weight function is. My algorithm scales all pixel intensities of an image uniformly, independently of their value. It only uses an exposure-based weight function when merging two (or more) images.
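
For illustration, a common choice in the HDR literature (not necessarily exactly what rawhdr uses) is a hat-shaped weight that favors mid-range pixels; merging is then a weighted average of the exposure-scaled frames:

import numpy as np

def hat_weight(x):
    # Peaks at x = 0.5 and falls to zero at 0 and 1, so under- and
    # over-exposed pixels contribute little to the merged result
    return np.clip(4.0 * x * (1.0 - x), 0.0, 1.0)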