jcelaya / hdrmerge

HDR exposure merging
http://jcelaya.github.io/hdrmerge/

Bayer drizzle [feature request] #157

Closed noushdr closed 5 years ago

noushdr commented 6 years ago

Hi! Thought this idea would be of interest to the HDRMerge community. It seems to be called "Bayer drizzle": it was implemented by Dave Coffin, based on the drizzle technique of Andrew Fruchter and Richard Hook (something similar also seems to be implemented in DeepSkyStacker). The idea is that, since you have different images of the same scene, probably with some motion between them, you can (theoretically) get full color information without interpolation, using only motion estimation. This is not meant to produce higher-resolution images, but to get full color information for the image. (I first wrote that this is not the same as "Variable-Pixel Linear Combination", but it is; sorry.)

I had some issues debayering some, but not all, HDRMerge images with RawTherapee: AMaZE generates artifacts, as do IGV and others. So I thought this idea would be really nice for this project.
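To illustrate the idea (this is not an existing HDRMerge feature): if the shifts between frames are known, each raw CFA sample can be accumulated into the color plane it actually measured, so any output pixel that collected samples of all three channels needs no interpolation. A minimal NumPy sketch, assuming an RGGB pattern and whole-pixel shifts for simplicity (`bayer_color` and `bayer_drizzle` are hypothetical names, not part of any existing codebase):

```python
import numpy as np

def bayer_color(y, x):
    """Channel index (0=R, 1=G, 2=B) of photosite (y, x) in an RGGB mosaic."""
    if y % 2 == 0:
        return 0 if x % 2 == 0 else 1
    return 1 if x % 2 == 0 else 2

def bayer_drizzle(frames, shifts):
    """Accumulate raw CFA samples from shifted frames onto one RGB grid.

    frames: 2-D Bayer mosaics of the same scene and exposure.
    shifts: (dy, dx) whole-pixel offset of each frame from the first.
    Returns per-channel sums and sample counts; wherever every channel
    of a pixel got at least one sample, no demosaicing is needed.
    """
    h, w = frames[0].shape
    rgb_sum = np.zeros((h, w, 3))
    weight = np.zeros((h, w, 3))
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                ty, tx = y - dy, x - dx  # position on the common grid
                if 0 <= ty < h and 0 <= tx < w:
                    c = bayer_color(y, x)  # channel this photosite measured
                    rgb_sum[ty, tx, c] += frame[y, x]
                    weight[ty, tx, c] += 1
    return rgb_sum, weight
```

With four frames shifted by (0,0), (0,1), (1,0), (1,1), every grid point collects one R, two G, and one B sample; with arbitrary sub-pixel motion, the accumulation would need fractional weights instead of integer counts.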

noushdr commented 6 years ago

Found this implementation with a permissive license (uses C and Python): https://github.com/spacetelescope/drizzle

Beep6581 commented 6 years ago

It seems they're describing Pixel Shift, which requires that four images be taken with identical settings while the sensor is shifted one sensel at a time in an (anti-)clockwise direction. RawTherapee already supports Pixel Shift demosaicing. How would this work with bracketed raw images as used in HDRMerge, taken on a tripod (no shift) or hand-held (massive shift, not pixel shift)?
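As a rough illustration of what Pixel Shift combination does (a sketch only; the RGGB layout and shift convention here are my assumptions, not RawTherapee's actual implementation):

```python
import numpy as np

def pixel_shift_rgb(frames):
    """Combine four Pixel Shift frames into a full-color image.

    Assumes frames[k][y, x] records scene point (y, x) through the CFA
    color at photosite (y + dy, x + dx) of an RGGB mosaic, where the
    four shifts walk a unit square.  Every scene point is then measured
    once as R, twice as G, and once as B -- no interpolation needed.
    """
    shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    cnt = np.zeros((h, w, 3))
    yy, xx = np.mgrid[0:h, 0:w]
    for f, (dy, dx) in zip(frames, shifts):
        # masks of photosites measuring each channel under RGGB
        r = ((yy + dy) % 2 == 0) & ((xx + dx) % 2 == 0)
        b = ((yy + dy) % 2 == 1) & ((xx + dx) % 2 == 1)
        g = ~(r | b)
        for c, m in enumerate((r, g, b)):
            rgb[..., c] += np.where(m, f, 0.0)
            cnt[..., c] += m
    return rgb / cnt  # cnt is [1, 2, 1] everywhere, never zero
```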

noushdr commented 6 years ago

Well, the idea was: HDRMerge asks for 4 images of each exposure (I usually do 4 exposures on landscapes, so 16 images), then first does Pixel Shift on all of them, and then does the HDR/ZeroNoise merge. Hand-held would indeed not be possible, but on a tripod you could set 2 s between each shot, so the position changes slightly between them.

RawTherapee already supports Pixel Shift demosaicing.

It does? I never heard of this feature in RT... It only works for Sony and Pentax, right? The idea was to get the benefits of drizzle, together with HDR/ZeroNoise, on any camera that has raw output with a normal Bayer filter (I don't know how X-Trans would behave).

noushdr commented 6 years ago

@Beep6581 Do you know if this is even possible to implement? If it is, where should one start? I don't have good coding skills, but there are probably other ways I could help (research, debugging, etc.).

Beep6581 commented 6 years ago

HDRMerge is raw in, raw out. I ask again, how would this work in HDRMerge, and what would the massive amount of work required to implement this achieve?

noushdr commented 6 years ago

Sorry if I wasn't clear. Here's the idea: [image: Drizzle HDR] (https://pictshare.net/944n60m72u.png)

I do understand this will require a massive amount of work, and I don't know whether it's possible to write this kind of data to DNG files, but there are many advantages (assuming the motion estimation is accurate):

  • No interpolation artifacts: less moiré and aliasing. Many kinds of photography require this, such as astrophotography, historic photographs (e.g. zoological specimens), etc.
  • Real color data. This is necessary, for example, for architecture photographers who need to match exactly what the interior designer did in a room, or for anything that must have exact colors.
  • Finer details. Some scientific images need accurate results; one scenario is deconvolution in microscopy. It seems luminance regularization (https://pdfs.semanticscholar.org/a746/571076e7dbd73c34816374e8c449bcd9c113.pdf) can only do so much, so interpolation can introduce errors into the deconvolution. This is no big deal for normal photography, but scientific images need to be precise.

I know you have experience with image processing; that's why I'm asking whether this is even possible, or whether you think there's some flaw in my reasoning.

jcelaya commented 6 years ago

Well, I would use HDRMerge first to get an HDR raw of each shifted image, then use RawTherapee for demosaicing. No further coding needed.

Anyway, how do you get the shifted shots in the first place? Do you have special equipment for that?

noushdr commented 6 years ago

Well, I would use HDRMerge first to get HDR raws of each shifted image, then use Rawtherapee for demosaicing. No further coding needed.

RT already supports merging different images using drizzle? Also, even if it does, I suppose the tone mapping will not be as accurate as if it were using full chroma information(?). I'm not against this idea, though.

Anyway, how do you get the shifted shots in the first place? Do you have special equipment for that?

Only a tripod, with slight movements between the shots. Even the shutter/mirror movement would cause a small shift in position. After that, "sub-pixel motion estimation" could align the photos as in the diagram above. The motion estimation is the difficult part. This publication has a method, but I don't know whether it would work without precisely controlled movements. Video codecs seem to have advanced implementations of motion estimation, such as this and this.
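For what it's worth, the whole-pixel part of that motion estimation can be sketched with plain phase correlation (NumPy only; sub-pixel precision needs extra work, e.g. an upsampled DFT around the peak, as scikit-image's phase_cross_correlation does):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the whole-pixel translation of img relative to ref.

    Phase correlation: the cross-power spectrum of two shifted images
    is a pure phase ramp, whose inverse FFT is a peak at the shift.
    Returns (dy, dx) such that img ~ np.roll(ref, (dy, dx), axis=(0, 1)).
    Real photos would also need windowing to tame edge effects.
    """
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12       # keep only the phase
    corr = np.fft.ifft2(cross).real      # peak marks the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                      # unwrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```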

I'm not asking/expecting you guys to simply implement this. This is a conversation, and I'm curious whether it could some day be implemented, as the advantages are very clear.

Beep6581 commented 5 years ago
  1. Pixel Shift is not Drizzle. Pixel Shift requires hardware capable of shifting the sensor one photosite at a time in a square pattern, orthogonally to the optical axis. Drizzle (variable-pixel linear reconstruction) is based on dithering: samples of pixels from the input images "rain down" onto a grid (hence the name "drizzle"), "taking into account shifts and rotations between images and the optical distortion of the camera". "Drizzle works with black and white images, color images, images taken on an equatorial mount that only need shifting between frames, images taken on an alt-az mount that require shifts and rotations, and even images taken on different scopes that require shifts, translations and scales to adequately align."
  2. Drizzle requires undersampling. Are the photos from your camera undersampled?
  3. HDRMerge is "raw in - raw out".
  4. The Pixel Shift and Drizzle algorithms do not output Bayer images; they output demosaiced ones. HDRMerge does not output demosaiced images. "Raw in - raw out".
  5. The DNG format can store demosaiced images, but HDRMerge does not support demosaiced images even in the DNG format. "Raw in - raw out".
  6. @heckflosse implemented Pixel Shift demosaicing in RawTherapee. It involves many variables, estimations, and automated algorithms which may change and improve over time. To benefit from those improvements, the Pixel Shifted images must remain unbaked (not demosaiced). HDRMerge merges exposures while preserving their CFA pattern (it does not demosaic them, except for the embedded preview), allowing one to demosaic them in any capable program now or in the future. If HDRMerge were to drizzle input images, it would be baking the output. It would also cross over from the domain of a merging-only program into the domain of a demosaicing program.
  7. As @jcelaya pointed out, you can HDRMerge first and demosaic later, but you cannot currently use RawTherapee for that, as RT supports Pixel Shift, not Drizzle.
  8. You could also do the opposite: drizzle a set of telescope photos for each exposure bracket using the Drizzle software, then merge the drizzled images using any of the many merging programs (such as Luminance HDR) and formats (such as FITS or OpenEXR).

https://spacetelescope.github.io/drizzle/drizzle/user.html
http://www.stsci.edu/~fruchter/dither/drizzle.html
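For reference, the core of variable-pixel linear reconstruction from point 1 can be sketched like this (a toy version only; the real spacetelescope/drizzle code also handles rotations, scale changes, and optical distortion):

```python
import numpy as np

def drizzle(images, shifts, scale=2, pixfrac=0.5):
    """Drizzle input images onto a finer output grid.

    Each input pixel is shrunk to a square "drop" of side pixfrac and
    rained onto an output grid `scale` times finer; the drop's flux is
    shared among output pixels in proportion to the overlap area.
    shifts: (dy, dx) offset of each image, in input-pixel units.
    """
    h, w = images[0].shape
    H, W = h * scale, w * scale
    flux = np.zeros((H, W))
    weight = np.zeros((H, W))
    for img, (dy, dx) in zip(images, shifts):
        for y in range(h):
            for x in range(w):
                # drop footprint in output-grid coordinates
                cy = (y + dy + 0.5) * scale
                cx = (x + dx + 0.5) * scale
                half = pixfrac * scale / 2.0
                y0, y1 = cy - half, cy + half
                x0, x1 = cx - half, cx + half
                for oy in range(max(0, int(np.floor(y0))), min(H, int(np.ceil(y1)))):
                    for ox in range(max(0, int(np.floor(x0))), min(W, int(np.ceil(x1)))):
                        a = (min(y1, oy + 1) - max(y0, oy)) * (min(x1, ox + 1) - max(x0, ox))
                        if a > 0:  # overlap area weights the flux
                            flux[oy, ox] += a * img[y, x]
                            weight[oy, ox] += a
    out = np.where(weight > 0, flux / np.maximum(weight, 1e-12), 0.0)
    return out, weight
```

pixfrac < 1 is what lets well-dithered inputs recover detail finer than the input pixel pitch, at the cost of holes in the weight map when too few frames cover a region.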

noushdr commented 5 years ago

Ok, I get it: "Bayer matrix in - Bayer matrix out". It's not that it can't be done; the idea is just not right for this project.

Thanks for all the replies.