darktable-org / darktable

darktable is an open source photography workflow application and raw developer
https://www.darktable.org
GNU General Public License v3.0

automatic perspective correction not ok when preview downsampling is enabled #5225

Closed MStraeten closed 4 years ago

MStraeten commented 4 years ago

If preview downsampling (https://github.com/darktable-org/darktable/pull/4702) is used, the automatic detection gives different results depending on the downsampling factor. Only 1/1 gives usable results.

To Reproduce: process an image that really needs perspective correction (for minimal perspective distortion the effect is not that noticeable):

  1. set downsampling to 'original'
  2. apply automatic perspective correction (e.g. vertical correction)
  3. set downsampling to '1/4'
  4. apply automatic perspective correction (e.g. vertical correction) --> the parameters differ (especially noticeable for rotation)

GrahamByrnes commented 4 years ago

This doesn't surprise me so much: automatic correction implies finding edges, which will not be so easy if the preview is down-sampled. So I'd suggest that in this case you'd be better off setting preview_downsampling to 1 (and restarting... I'm working on that).

MStraeten commented 4 years ago

The identified edges seem to be the same or at least similar (same colorchecker shot as in #5226), but the rotation is about 8 degrees instead of 0.07. (I removed the outer edges so just the edges of the checker are used.) Sample: https://mega.nz/file/nCAXzI4Y#IikcBEwmKgJgkYgl0CDGES88qwa9h9cOlSaL6jwF1zE

TurboGit commented 4 years ago

@GrahamByrnes : I can reproduce. The identified edges are ok but the cropped area and grids are completely off.

GrahamByrnes commented 4 years ago

Oh... I see, this is a different effect, yes.

It's just the frame being shrunk by the down-sampling ratio. That should be fixable.

MStraeten commented 4 years ago

Here are two very simple shots to demonstrate:

- 20200530-IMG_8649.CR2 & .xmp: https://mega.nz/file/mX5A1aJQ#WC0_NW1AOYN-TQhlB__JbvFFDhEqpRxSyUnLqQ9Zy9g https://mega.nz/file/uboE2QKA#Iwm9SmJtdtAdvirc_zsTkyiE9BJXCWy8eP7MWMSoQwg
- 20200530-IMG_8653.CR2 & .xmp: https://mega.nz/file/mKp0BQ5Y#cAM0Ytp1SQf1WW8rxMaoDOYttpLIvfHFnGwcUhX1rqQ https://mega.nz/file/iXxCVCiS#UXevHnXKSW2a4OmFXCWReb8IgiqRZY1jAgV8XBW5VRg

In the history stack you can see the result for downscaling=original below the exposure entry, and the result for 1/4 as the latest operation. Automatic perspective correction is applied to vertical structures only, to keep things simple. The vertical lines were selected manually so that both runs work from a similar basis for calculating the parameters. The more correction needs to be applied, the more the results deviate.

GrahamByrnes commented 4 years ago

It's obvious that something very strange is happening here. What befuddles me is how this could be generated by down-sampling... the automatic crop made sense, since in fact only part of the image was being taken into account.

Here it seems less obvious, since it appears possible to select lines from anywhere in the image (is this correct?). Then when they are passed to ashift, it comes up with a crazy rotated solution.

If it were still an issue of only part of the image being considered, selecting one strong line on the right and another on the left should, to my mind, result in an image where one or the other of those lines is made fully vertical and the other ignored. In fact the whole thing is twisted 30° or so to the left. So @MStraeten, do you have any hypothesis on what is happening here?

GrahamByrnes commented 4 years ago

There is definitely a difference in the data ashift is working with. I just fed it a photo of a bookshelf. At full scale: 575 lines, 165 iterations, and the result is excellent. At 1/4: 137 lines, 124 iterations, and the result is poor.

Looking at the display of lines found, the long lines are still there, but the many detailed lines picking up small features are not recognised. Under those circumstances, the algorithm seems to pick one edge as most important and rotate that to be vertical.

At this point, I'm thinking the only fix would be to tune the fitting parameters in e.g. ashift_lsd.c to accept lower-quality lines when the total image resolution is lower.
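
Not darktable's actual code, but to make the idea concrete, here is a minimal C sketch of what "tuning the fitting parameters with resolution" could look like; the struct, field names and scaling rules are assumptions for illustration only.

```c
/* Illustrative sketch only -- parameter names and scaling rules are
 * assumptions, not the real ashift_lsd.c interface. The idea: relax the
 * line-acceptance criteria when working on a downsampled preview so the
 * short support lines are not all discarded. */
#include <math.h>

typedef struct
{
  float min_length;   /* shortest accepted line segment, in pixels */
  float log_eps;      /* LSD-style significance threshold (lower = more permissive) */
} line_detect_params_t;

static line_detect_params_t scale_detect_params(line_detect_params_t full, float downsample)
{
  /* downsample = 1.0f for 'original', 0.25f for the 1/4 preview */
  line_detect_params_t p = full;
  p.min_length *= downsample;          /* a 40 px line at full size is 10 px at 1/4 */
  p.log_eps += log10f(downsample);     /* accept slightly less significant detections */
  return p;
}
```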

MStraeten commented 4 years ago

It looks like parameters are taken from both the preview and the full image to calculate the transformations, so the error scales with the downsampling factor.
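
If that hypothesis holds, the failure mode would be a missing (or wrongly scaled) rescaling step somewhere between line detection and the fit, roughly as in the illustrative C sketch below (not the module's real code; the names are made up):

```c
/* Illustrative only: endpoints found on the downsampled preview must be
 * mapped back to full-image coordinates, otherwise every quantity derived
 * from them is off by the downsampling factor. */
typedef struct { float x0, y0, x1, y1; } line_seg_t;

static void preview_to_full(line_seg_t *l, float downsample /* e.g. 0.25f */)
{
  const float s = 1.0f / downsample;   /* 4.0f for a 1/4 preview */
  l->x0 *= s; l->y0 *= s;
  l->x1 *= s; l->y1 *= s;
}
```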

GrahamByrnes commented 4 years ago

Yes, if I save my bookcase as a 1.5k x 1k TIFF (i.e. 1/4 size), then re-open it in dt with downsampling = 1, the algorithm snaps directly to the right correction. There are about half the number of lines being used...

I don't think it's an error-bound issue: with downsampling = 1/4, simplex() spends a lot of time around -1.4°, whereas at 1/1 it jumps pretty quickly to -4.2°.

Using the shift-click and ctrl-click options on the "get structure" button throws up a lot more lines but doesn't change the outcome. Also, simplex() is fed homogeneous coordinates (scaled to be in the square [0,1]x[0,1], if I'm not mistaken).

So it would seem something is going wrong in preparing the data for simplex(), after creating the catalogue of lines.
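
As a rough picture of where such a mismatch could slip in (again an assumed sketch, not the actual data-preparation code): if the endpoints are normalised to the unit square with the wrong reference dimensions, all points are pulled towards one corner and the optimiser settles on a different rotation.

```c
/* Assumed sketch: normalising a line endpoint to homogeneous coordinates
 * in [0,1]x[0,1] before the fit. If (px, py) are preview pixels but
 * width/height are full-image dimensions, every normalised coordinate is
 * shrunk by the downsampling factor and the fitted correction is skewed. */
typedef struct { float x, y, w; } homog_pt_t;

static homog_pt_t normalise(float px, float py, int width, int height)
{
  homog_pt_t h = { px / (float)width, py / (float)height, 1.0f };
  return h;
}
```

That would also be consistent with the 1/4-size TIFF behaving correctly at downsampling = 1: there the detected coordinates and the reference dimensions agree again.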

MStraeten commented 4 years ago

Fix confirmed, thanks for the solution.