BINARHO, you don't need to reverse engineer. Read the code... as I did...
When I wrote 'reverse engineering' I meant 'decoding' 41 uncommented C++ files. As I don't regularly program in C++ and I'm not familiar with Qt (which slows me down when I try to understand which functions do UI and which implement the algorithm), I was hoping for an outline of the processing chain.
The main issue for me is that I don't understand the principle by which the mask is generated, or what purpose the response functions in the code serve.
A short time ago, when I started contributing to hdrmerge to make speedups and fix bugs, I also didn't know anything about the way it works. I read the code and understood it... Sorry, you have to read the code to get an idea of how it works. A few hints at least: if you want to exclude the GUI files, just exclude the files whose names contain an uppercase 'Q'. The most important files are Image.cpp (.hpp) and ImageStack.cpp (.hpp).
Ingo
Can you clarify for me what fattenMask(mask, featherRadius) in the compose function (ImageStack.cpp) does? Obviously it erodes the mask in some way, but what is the principle? I couldn't figure it out from the GIMP code.
@BINARHO : The fattenMask function is indeed a bit difficult to read because of all the conditions inside the loop, which are only there to make it faster. Take a look at https://github.com/heckflosse/hdrmerge/blob/master/ImageStack.cpp#L337 for a better understanding. Just the lines from 337 to 343, not the lines above (SSE code). You'll see that it's quite simple.
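To illustrate the idea only (this is not the actual repository code; the function and variable names below are made up), a 'fatten' pass of this kind can be read as: each pixel takes the maximum mask value found within the feather radius around it, so the masked regions grow outward:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a mask "fatten" (grow/dilate) pass: every output
// pixel becomes the maximum mask value found in a square neighborhood of
// `radius` pixels, clamped at the image borders. The real fattenMask in
// ImageStack.cpp is a heavily optimized (partly SSE) variant of this idea.
std::vector<uint8_t> fattenMaskSketch(const std::vector<uint8_t>& mask,
                                      int width, int height, int radius) {
    std::vector<uint8_t> result(mask.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            uint8_t v = 0;
            for (int ny = std::max(0, y - radius); ny <= std::min(height - 1, y + radius); ++ny) {
                for (int nx = std::max(0, x - radius); nx <= std::min(width - 1, x + radius); ++nx) {
                    v = std::max(v, mask[ny * width + nx]);
                }
            }
            result[y * width + x] = v;
        }
    }
    return result;
}
```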
Thanks, helped a lot.
I have a more general question about the algorithm: I didn't see where the bit-depth expansion takes place. What I mean is that in the merging I see in the compose function, a weighted average is performed according to the mask, but if it is really just an average then we never go beyond the 12/14 bits per pixel we originally had; am I wrong? If so, this is 'Exposure Fusion' and not 'HDR', no?
@BINARHO The base scale factor (to scale up from e.g. range 0..16383 to range 0..65535) is set here https://github.com/heckflosse/hdrmerge/blob/master/Image.cpp#L56
The raw files get scaled up by this factor, merged, and the result gets scaled down again to the original range (not bit depth). The downscaling is here: https://github.com/heckflosse/hdrmerge/blob/master/ImageStack.cpp#L415
If your input raw has 14 bits, the range will be from 0 to 16383. The range will be the same in the output DNG file, but because of floating point you have more intermediate values, especially in the first EVs. The point is that in your original (integer) raw file you have only 2 possible values for the first EV, 2 for the second, 4 for the third, 8 for the fourth and so on (I hope I'm calculating correctly here). In the final floating-point DNG file you have 'the same' number of possible values for each EV. After merging the raws (in floating point) the result is scaled down again to the original range (e.g. 0..16383), not to the original bit depth (because we are in floating-point space now). Though this downscaling could possibly be avoided by storing scaled white levels in the DNG file, it doesn't matter whether you work in the range 0..65535 or in the range 0..16383 when the values are floating-point values.
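A minimal sketch of the range handling described above, with made-up names and a 14-bit example (this is not the actual HDRMerge code):

```cpp
#include <vector>

// Illustrative only: a 14-bit raw spans 0..16383, so the base scale factor
// to reach the common working range 0..65535 is 65535 / 16383 ≈ 4.
int main() {
    const float whiteLevel = 16383.0f;            // 14-bit white level
    const float baseScale  = 65535.0f / whiteLevel;

    // Pretend these are merged pixel values already in the scaled-up,
    // floating-point working range 0..65535.
    std::vector<float> merged = { 123.7f, 40000.25f, 65535.0f };

    // Scale back down to the *original range* (0..16383), not the original
    // bit depth: the values stay floating point, so the intermediate values
    // created by the merge are preserved.
    for (float& v : merged) {
        v /= baseScale;
    }
    return 0;
}
```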
Sorry, I'm a bit busy these days, so just a note about this. Thanks to @heckflosse for the explanation. The motivation behind HDRMerge is this article: http://www.guillermoluijk.com/article/virtualraw/index_en.htm
@jcelaya Thanks! Very interesting article. I'm assuming the actual algorithm behind 'Zero Noise' is not available? (I didn't find it, at least.)
@heckflosse Thanks a lot, again. Your explanation about integer vs. floating point clarified the problem I had in mind with the range we work in. The question that arises is: where do we shift the values of the different exposures? Because, for example, a pixel with value 100 in the 0 EV image represents a different photon count than 100 in the -4 EV image, and we don't want to replace a blown-out area in the 0 EV image with the corresponding patch from the -4 EV image without correcting its brightness first. Am I wrong? If not, I didn't notice where this procedure is done in the code.
@BINARHO That's what https://github.com/heckflosse/hdrmerge/blob/master/Image.hpp#L53 gives you. Also take a look at https://github.com/heckflosse/hdrmerge/blob/master/Image.cpp#L99 which calculates the base for this function. Edit: also take a look here: https://github.com/heckflosse/hdrmerge/blob/master/Image.hpp#L84
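As a rough illustration of the brightness matching only (hypothetical names, not the code behind the links above): values from a darker frame are brought into the reference frame's scale by a relative-exposure factor, which can be estimated from pixels that are usable in both frames:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: estimate the relative exposure between a brighter
// reference frame and a darker frame from pixels that are neither clipped
// in the reference nor near zero in the darker frame. Both vectors are
// assumed to be the same size (same sensor layout).
float estimateRelativeExposure(const std::vector<uint16_t>& reference,
                               const std::vector<uint16_t>& darker,
                               uint16_t saturation) {
    double sumRef = 0.0, sumDark = 0.0;
    for (size_t i = 0; i < reference.size(); ++i) {
        if (reference[i] < saturation && darker[i] > 0) {
            sumRef  += reference[i];
            sumDark += darker[i];
        }
    }
    return sumDark > 0.0 ? static_cast<float>(sumRef / sumDark) : 1.0f;
}

// A pixel taken from the darker frame is multiplied by this factor before it
// replaces a blown-out pixel of the reference frame, so e.g. a value of 100
// in a -4 EV frame becomes roughly 1600 on the 0 EV scale.
inline float exposedValue(uint16_t rawValue, float relativeExposure) {
    return rawValue * relativeExposure;
}
```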
Hi, I've been reading a lot of papers about exposure merging and, as mentioned in the README file, all of the papers I saw refer to developed RAW files, so I'm very curious to know what approach you took in HDRMerge that doesn't involve developing the RAW files.
Can you describe the algorithm? Or at least point me to which function is responsible for what in the algorithm chain? Because reverse engineering doesn't sound like fun to me right now :sweat_smile:
Thanks in advance, and sorry for the unusual request :smile: