darktable-org / darktable

darktable is an open source photography workflow application and raw developer
https://www.darktable.org
GNU General Public License v3.0

META ISSUE : pipeline reordering for 3.6 #8634

Closed: aurelienpierre closed 6 months ago

aurelienpierre commented 3 years ago

Please list, gather, and synthesize current issues with the current default pipeline order (3.0).

So far, we have:

elstoc commented 3 years ago

Unless we plan to deprecate spot removal, this should also be moved before crop/rotate. Not sure I can comment on local contrast/vignetting, but I don't have an issue with moving them if we think it will be an improvement.

Nilvus commented 3 years ago
* [ ]  Retouch & spot removal may be better before crop/rotate?

As @elstoc suggests, I think spot removal could (and should) be deprecated. They seem fine to me after crop/rotate. I prefer crop/rotate before retouch: something at the image border could disappear after a rotate/crop, so there would be no need to retouch those parts, for example. But before could be OK too.

* [ ]  Local contrast may be better before filmic / base curve?

I have tested both and find, for my needs, that it's better most of the time after filmic (I don't use base curve) and only sometimes before. So I lean toward after.

Nothing to add for other points.

elstoc commented 3 years ago

They seem fine to me after crop/rotate. I prefer crop/rotate before retouch: something at the image border could disappear after a rotate/crop, so there would be no need to retouch those parts, for example. But before could be OK too.

If you want to take the source for retouch/spots from outside of the cropped area, then crop/rotate must come after those modules. This is a very common use case for me and I find myself reordering these modules often. The current order means that both source and target shapes must be placed inside the cropped area.

AlicVB commented 3 years ago

Agree with @Nilvus: spot removal can be deprecated. For the retouch position, I think it's better before crop&rotate. One use case:

jenshannoschwalm commented 3 years ago

I am not sure we can just move crop&rotate to another position in the pipeline while keeping developed images as they are.

  1. any distorting module (liquify, retouch, ...) could refer to areas currently outside the crop that suddenly end up inside it. Are we sure this is always corrected precisely?
  2. modules might calculate data from the available input as a sort of reference (defringe?); could it be that those parameters change without notice?

wouldn't this mean introducing a new module order, or at least repositioning crop&rotate only for new edits?

elstoc commented 3 years ago

wouldn't this mean introducing a new module order, or at least repositioning crop&rotate only for new edits?

That's what I assumed we were discussing, yes. Definitely wouldn't want to automatically reorder the modules for existing edits.

aurelienpierre commented 3 years ago

I have mixed feelings about deprecating spots. The retouch module is far from perfect in terms of results and a lot slower.

I am not sure we can just move crop&rotate to another position in the pipeline while keeping developed images as they are.

We are discussing a new default here, which will not affect older edits, since the pipeline order is written in the DB along with the editing history.

ralfbrown commented 3 years ago

The cloning tool in the retouch module is functionally equivalent to the spot removal module, and in fact started out as a copy of same (obvious from the code and comments). It should also be essentially the same speed -- it's the heal tool that is slower due to the extra computation in trying to adapt the source region to the destination. IME, "heal" nearly always gives superior results compared to "clone"; the exception is patching large mostly-uniform areas with a dramatically different color at one edge of the destination region than the other (e.g. filling in missing sky on a handheld stitched panorama).
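
For readers unfamiliar with the two algorithms, here is a minimal single-channel sketch of the difference. The function names are invented and this is not darktable's actual implementation, just the standard clone-vs-Poisson-heal idea:

```c
/* Illustrative contrast of clone vs. heal on one channel (NOT darktable's
 * code). Clone copies the source patch verbatim. Heal keeps the destination
 * border fixed and runs Gauss-Seidel relaxation so that the interior's
 * Laplacian matches the source patch (Poisson-style seamless cloning). */
#include <string.h>

static void clone_patch(const float *src, float *dest, int w, int h)
{
  memcpy(dest, src, sizeof(float) * (size_t)w * h); /* verbatim copy */
}

static void heal_patch(const float *src, float *dest, int w, int h, int iters)
{
  for(int it = 0; it < iters; it++)
    for(int y = 1; y < h - 1; y++)
      for(int x = 1; x < w - 1; x++)
      {
        const int i = y * w + x;
        /* guidance field: Laplacian of the source patch */
        const float lap = 4.f * src[i] - src[i - 1] - src[i + 1]
                          - src[i - w] - src[i + w];
        /* relax toward a solution that blends into the fixed border */
        dest[i] = 0.25f * (dest[i - 1] + dest[i + 1]
                           + dest[i - w] + dest[i + w] + lap);
      }
}
```

This sketch also shows why heal can struggle near strong edges: the boundary condition pins the solution to whatever sits at the destination's border, so colors at the border leak into the patched interior.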

Regarding pipe order, spots/retouch should definitely come before the modules which can crop the image (perspective correction as well as c&r). I would also argue that liquify is better placed after spots/retouch (but still before c&r/perspective). In the current ordering, placing a shape over top of a warp from liquify can lead to odd-appearing shapes, including decidedly non-smooth outlines, and unexpected results for clone/heal.

Nilvus commented 3 years ago

And related to #7473

GustavHaapalahti commented 3 years ago

I have noticed that defringe does not work very well with the 3.0 order when using it on astro photos. The stars that have fringing are blacked out. If I move defringe before/below input color profile, it works well.

rawfiner commented 3 years ago

I agree with @GustavHaapalahti, defringe is better before input color profile. In case of strong chromatic aberration, luminance can become very dark due to the aberration; correcting it before input color profile avoids this issue.

aurelienpierre commented 3 years ago

Defringe is a Lab module that de-chromas edges in Lab. There is no Lab space before input color profile (aka matching sensor RGB to XYZ), hence no chroma, so that's all bogus. I'm not sure why it behaves better before, but it surely doesn't do what's advertised.

IME, "heal" nearly always gives superior results compared to "clone"; the exception is patching large mostly-uniform areas with a dramatically different color at one edge of the destination region than the other (e.g. filling in missing sky on a handheld stitched panorama).

Healing gives correct results if you correct destination patches far from edges with source patches far from edges. Any edge near the source or the destination messes up the output, especially since it is not possible to rotate the patches to match the edge direction between source and destination. Also, a quick look at the code shows that it uses an algorithm tuned for 8-bit unsigned integers, which is not what we have here. Anyway, point being: I don't trust that module too much.

Regarding pipe order, spots/retouch should definitely come before the modules which can crop the image (perspective correction as well as c&r).

I'm not so sure. Because of the utter sensitivity of the retouch/spot stamps to edges, if the image is not yet perspective-corrected before going to retouching, that greatly reduces the number of candidate patches to sample (because only patches with the exact same orientation as the destination will be candidates).

I would also argue that liquify is better placed after spots/retouch (but still before c&r/perspective). In the current ordering, placing a shape over top of a warp from liquify can lead to odd-appearing shapes, including decidedly non-smooth outlines, and unexpected results for clone/heal.

The problem then is that the liquify control doesn't happen in viewport space, which makes coordinate mapping odd in the GUI. Also, it makes more sense to resculpt the shapes after the uniform geometric distortions (always edit general -> particular, uniform -> local). The true problem here seems to be that masks higher in the stack follow the liquify distortions, and I don't see why. The output of liquify should be treated as a new image, and I see no point in tracking its distortions.

ralfbrown commented 3 years ago

Regarding pipe order, spots/retouch should definitely come before the modules which can crop the image (perspective correction as well as c&r).

I'm not so sure. Because of the utter sensitivity of the retouch/spot stamps to edges, if the image is not yet perspective-corrected before going to retouching, that greatly reduces the number of candidate patches to sample.

The stamps pass through the distort functions of the preceding modules, so you get the same candidates either way, they just look like they have different shapes.
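
A hedged sketch of what that means, with invented names rather than darktable's exact iop API:

```c
/* Hedged sketch with invented names (not darktable's exact iop API): points
 * stored in RAW space are pushed through the distort callback of every
 * enabled module that precedes the consumer. The candidate source areas are
 * therefore identical wherever retouch sits; only their apparent shape in
 * the consumer's coordinate space differs. */
typedef struct module_t module_t;
struct module_t
{
  int enabled;
  /* maps interleaved (x,y) pairs from this module's input to its output */
  void (*distort_transform)(const module_t *self, float *points, int count);
};

static void transform_points_to(module_t *pipe, int consumer_pos,
                                float *points, int count)
{
  for(int m = 0; m < consumer_pos; m++)
    if(pipe[m].enabled && pipe[m].distort_transform)
      pipe[m].distort_transform(&pipe[m], points, count);
}
```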

aurelienpierre commented 3 years ago

OK, so that explains a lot of weird behaviours I have seen. Is there even a use case for carrying distortions along the pipe like that? From where I sit, it only creates problems and odd misbehaviours.

elstoc commented 3 years ago

There's some discussion of these sorts of things in #3904 if it helps

jenshannoschwalm commented 3 years ago

Is there even a use case for carrying distortions along the pipe like that?

Masks?

rawfiner commented 3 years ago

Defringe is a Lab module that de-chromas edges in Lab. There is no Lab space before input color profile (aka matching sensor RGB to XYZ), hence no chroma, so that's all bogus. I'm not sure why it behaves better before, but it surely doesn't do what's advertised.

I understand. Defringe itself may indeed behave worse before input color profile than after. Having it before input color profile helps because it improves the behavior of input color profile, which behaves badly in the presence of chromatic aberration (in other words: [bad defringe behavior + good input color profile behavior] is visually better than [bad input color profile behavior + good defringe behavior]).

aurelienpierre commented 3 years ago

For masks, you only need to map the coordinates of the mask center through the distortions (like an anchor: you record the pixel to which it should be attached), but not the whole mask shape. Since the mask shape is drawn in the viewport reference, it should respect the viewport shape.
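
A hypothetical sketch of that anchor-only behaviour (invented names; this is a proposal, not what darktable currently does):

```c
/* Hypothetical anchor-only mapping (a proposal sketch, not current darktable
 * behaviour): only the shape's centre is pushed through the distortion
 * chain; the outline is generated in viewport space around the mapped
 * anchor, so it always keeps the exact shape the user drew. */
typedef struct { float x, y; } point_t;

/* stand-in for the whole distortion chain: maps one RAW point to viewport */
typedef point_t (*distort_chain_fn)(point_t raw);

static void place_circle_mask(point_t raw_anchor, float radius,
                              distort_chain_fn chain,
                              point_t *out_center, float *out_radius)
{
  *out_center = chain(raw_anchor); /* the anchor follows the image feature */
  *out_radius = radius;            /* the shape itself stays undistorted */
}
```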

aurelienpierre commented 3 years ago

Having it before input color profile helps because it improves the behavior of input color profile, which behaves badly in the presence of chromatic aberration.

Input color profile is a linear map; how could it be subject to frequential artifacts?

jenshannoschwalm commented 3 years ago

Remember raster masks. They get transformed pixel-wise.

aurelienpierre commented 3 years ago

Remember raster masks. They get transformed pixel-wise.

Again, for what purpose?

Distortions happen early in the pipeline. I think that, after the last distorting module, we can set the image geometry in stone and assume the pixel matte is static for the rest of the operations to come. Carrying geometric distortions along is only going to create inconsistencies between viewport pixels (GUI) and image pixels (pipeline).

Any raster image processing software applies distortions only to the previous layer; then everything happens relative to the viewport. Since we are now mixing raster features with a non-destructive workflow, I think we should rather enforce some discipline on the user side than try to account for all possible cases: set some sort of distortion fence, and everything higher in the stack is assumed to get a pixel matte == viewport.
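
A sketch of what such a fence could look like, with invented names and structures:

```c
/* Hypothetical "distortion fence" (invented names, not darktable code):
 * modules below the fence may remap geometry; for everything at or above it,
 * matte coordinates == viewport coordinates holds by construction. */
typedef struct
{
  const char *name;
  int distorts; /* nonzero if the module remaps pixel geometry */
} module_desc_t;

/* fence = index just past the last distorting module in the pipe */
static int find_distortion_fence(const module_desc_t *pipe, int n)
{
  int fence = 0;
  for(int m = 0; m < n; m++)
    if(pipe[m].distorts) fence = m + 1;
  return fence;
}

/* masks attached to a module at `pos` can skip coordinate mapping entirely */
static int mask_is_viewport_stable(const module_desc_t *pipe, int n, int pos)
{
  return pos >= find_distortion_fence(pipe, n);
}
```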

jenshannoschwalm commented 3 years ago

Again, for what purpose?

A mask used as a raster source can come from any module, wherever it sits in the pipeline.

aurelienpierre commented 3 years ago

But the modules supporting masks all come after the distortion modules.

AlicVB commented 3 years ago

@aurelienpierre: not sure we're speaking of the same thing, but mask points, etc. are always defined in the initial image space. So we need to be able to transform (and backtransform) coordinates from any point of the pipe to the initial image space. This is needed so we don't force the user to edit the image in a defined order. Take this simple example:

=> you want your carefully defined mask to still follow your object!

The same also applies to "on-canvas" drawings...

...again, maybe I've not understood what you were saying ;)

aurelienpierre commented 3 years ago

@AlicVB that goal is simply unsustainable with the local distortions we have now. I get it for uniform distortions, like perspective and rotation, but #3904 clearly shows the limits of this design (even for simple lens distortion).

I'm going to say that if the user has to change the orientation of the pic long after he started drawing shapes, he needs to seriously rethink his editing method.

rawfiner commented 3 years ago

Input color profile is a linear map; how could it be subject to frequential artifacts?

Actually, the problem is that defringe assumes luminance is not erroneous, which is wrong in the case of chromatic aberrations. And the discontinuity in luminance due to a chromatic aberration can be much higher after input color profile than before.

For example, take a chromatic aberration between a dark tree and the sky, which gives a blue fringe (sky on the left, tree on the right; the aberration is visible where blue is erroneously high while green is already low):

G: ——_____
B: ————___

After input color profile, the green value can become negative if there is a negative coefficient for blue in the input color matrix, even if that coefficient is very small, because in our example blue is very high compared to green. Also, if we consider that luminance is mostly green, before input color profile our green channel is error-free (we can assume that one of the 3 channels is a reference and the other 2 are erroneous), while after input color profile the error has spread over all channels. So after input color profile we get, in such a case, a much stronger error on luminance.

In our example, looking at luminance before (BL) and after (AL) input color profile:

G:  ——_____
B:  ————___
BL: 3322111
AL: 3300111
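
The same arithmetic as a tiny standalone program, with an illustrative (made-up) matrix row; at a fringe pixel the small negative blue coefficient is enough to push the profiled green negative:

```c
#include <stdio.h>

int main(void)
{
  /* hypothetical G row of an input color matrix, applied to (R, G, B);
   * a small negative B coefficient like this is common in camera matrices */
  const float row_g[3] = { 0.10f, 0.98f, -0.08f };
  /* a blue-fringe pixel: green already low, blue erroneously high */
  const float rgb[3] = { 0.02f, 0.03f, 0.90f };
  const float g_out = row_g[0] * rgb[0] + row_g[1] * rgb[1] + row_g[2] * rgb[2];
  printf("profiled green = %f\n", g_out); /* 0.002 + 0.0294 - 0.072 = -0.0406 */
  return 0;
}
```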

TurboGit commented 3 years ago

I'm going to say that if the user has to change the orientation of the pic long after he started drawing shapes, he needs to seriously rethink his editing method.

Well, sometimes (and I have done that many times) you edit a picture and at the end you find that it is better with some other orientation. This may be a standard landscape or portrait flipped horizontally to change the reading, but it's also true for architectural details, mostly graphic subjects that work in different orientations.

So I agree with @AlicVB, the fundamental design of masking in the original space must be kept.

AlicVB commented 3 years ago

he needs to seriously rethink his editing method.

not really if you're speaking about "creative" distortions...

and anyway, going back to a static order in terms of user interaction is not really an option imo. Even if it's great on paper, I guess I'm not the only one who needs to readjust a few things sometimes (either forgotten, wrongly set, or not precise enough).

aurelienpierre commented 3 years ago

So I agree with @AlicVB, the fundamental design of masking in the original space must be kept.

The original design assumed only non-destructive modules. The retouch and liquify modules are intrinsically raster modules (they swap and move pixels at the pixel level); in fact, dt is probably the only non-destructive editor to have them. That's bending its purpose a lot. In any raster editor, when you change low layers you void the higher layers; that's how it is. So there are decisions that need to be dealt with early, which is why a workflow is more than messing around with options in a random order.

Right now, I don't care what should or should not be kept. We have several problems of shifting between GUI-drawn masks and actual pixel masks, the most severe of them happening with lens correction (which has been there forever and is reasonably simple because it's uniform, so it doesn't even have the excuses liquify has). Less severe issues occur with retouch clone/heal sampling: when you sample areas that look alike in the viewport, in the pixelpipe they might actually be distorted and different, so there is no way to know what you actually sample.

In both instances, the GUI feedback is completely misleading and there is no way to guess what's really going on in the pipeline.

Thus, right now, for the sake of allowing occasional hypothetical second thoughts on the first stages of the pipeline, we are degrading core features 100% of the time in ways that cannot be fixed. Consistency between what is drawn and what appears on screen is not optional here. Interactions happen in viewport space; GUI feedback needs to honor that.

Besides, redrawing vector masks is really not that bad. I do it a lot when copy-pasting edits, it's better than having masks that unpredictably shift. Drag and drop and done. Compared to that, trying to predict the pixel shift between viewport and pipeline is… well, good luck.

elstoc commented 3 years ago

If we're talking about continuing to draw the points of the mask (the nodes of a curve, the centre of a circle, etc.) in the coordinates of the original image, but drawing the actual shape onto the distorted image (the viewport), that seems fine to me. The points will still be placed on the same image features if you change the distortion subsequently, but the interpretation of the shape between those points will be much more predictable and will match what it looked like when the user originally placed the shape.
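
A hypothetical sketch of that split: the stored control nodes travel through the distortion chain individually (so they keep following image features), while the segments between them are interpolated only after the nodes land in viewport space:

```c
/* Hypothetical node-anchored drawing: stored control nodes go through the
 * distortion chain one by one; the outline between them is interpolated
 * afterwards, in viewport space, so it stays smooth and predictable.
 * `viewport_poly` must hold nnodes * steps_per_segment points. */
typedef struct { float x, y; } pt_t;
typedef pt_t (*distort_chain_fn)(pt_t raw); /* RAW -> viewport */

static void draw_shape(const pt_t *raw_nodes, int nnodes,
                       distort_chain_fn chain,
                       pt_t *viewport_poly, int steps_per_segment)
{
  int out = 0;
  for(int i = 0; i < nnodes; i++)
  {
    const pt_t a = chain(raw_nodes[i]);                /* nodes stay anchored */
    const pt_t b = chain(raw_nodes[(i + 1) % nnodes]); /* to image features  */
    for(int s = 0; s < steps_per_segment; s++)
    {
      /* linear interpolation in viewport space (a spline in practice) */
      const float t = (float)s / (float)steps_per_segment;
      viewport_poly[out++] = (pt_t){ a.x + t * (b.x - a.x),
                                     a.y + t * (b.y - a.y) };
    }
  }
}
```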

aurelienpierre commented 3 years ago

[image]

Like that? Isn't it already what's supposed to happen?

parafin commented 3 years ago

You can't save masks in coordinates other than the original picture's, because basically any module (certainly liquify) can be disabled (e.g. temporarily). Having masks bound to the output of some module is simply impossible (well, it will lead to very strange results which will make this feature unusable).

aurelienpierre commented 3 years ago

Having masks bound to the output of some module is simply impossible (well, it will lead to very strange results which will make this feature unusable).

You can bind masks to pixel matte coordinates ± zoom and translation, then let the distorting modules populate the matte as they see fit. That defines a fence between the distorting modules and the masking modules, which isolates them entirely and allows us to always assume matte reference == viewport reference. Since mask feathering is available, drawn masks can follow objects in a sloppy way without consequences, so the micro-distortions of liquify should not be a problem.

elstoc commented 3 years ago

Isn't it already what's supposed to happen?

I dunno, but it's not what actually happens (again, see #3904). It draws the entire mask on the base image and then transforms that entire mask through the pipe afaik (so that it's drawn "correctly" on the base image but ends up distorted when used).

aurelienpierre commented 3 years ago

So it's even worse than I thought. Does anybody know what actually happens in there, and whether the issue is not merely some maths mistake in the transforms (like a backtransform used in place of the direct transform)?

parafin commented 3 years ago

Having masks bound to the output of some module is simply impossible (well, it will lead to very strange results which will make this feature unusable).

You can bind masks to pixel matte coordinates ± zoom and translation, then let the distorting modules populate the matte as they see fit. That defines a fence between the distorting modules and the masking modules, which isolates them entirely and allows us to always assume matte reference == viewport reference. Since mask feathering is available, drawn masks can follow objects in a sloppy way without consequences, so the micro-distortions of liquify should not be a problem.

What exactly will this solve? Being able to draw straight lines after some tricky distortion module? If it's just a UI problem, then there's probably no need for backend changes.

aurelienpierre commented 3 years ago

What exactly will this solve? Being able to draw straight lines after some tricky distortion module? If it's just a UI problem, then there's probably no need for backend changes.

Being able to draw exactly on one pixel, from the GUI to the pipeline, without having to guess the pixel shift between the two references.

elstoc commented 3 years ago

As an example, you apply lens correction, which fixes a curved horizon, but you want to then apply a straight gradient mask to that horizon. Perversely, though you've fixed the horizon, now it's the mask that's curved, because the mask started off straight (w.r.t. the original image) and has been made curved by the lens correction. This was the primary use case for being able to curve the gradient mask iirc.

aurelienpierre commented 3 years ago

As an example, you apply lens correction, which fixes a curved horizon, but you want to then apply a straight gradient mask to that horizon. Perversely, though you've fixed the horizon, now it's the mask that's curved, because the mask started off straight (w.r.t. the original image) and has been made curved by the lens correction. This was the primary use case for being able to curve the gradient mask iirc.

So that means there is no backtransform implemented between drawing and saving mask nodes: drawing happens on the RAW image, which is not visible in the viewport. But I guess that backtransform would require the mask to be rasterized in viewport space before running, which makes it hardly suited to saving vector nodes in RAW space.

Aka ditch RAW coordinates.

elstoc commented 3 years ago

Aka ditch RAW coordinates

And we're back to enforcing workflow and rework when you change your mind, which I think was how it used to be. Personally, I'd rather live with the current limitations than that, because I do a lot of changing my mind about distortions later in my workflow.

Again, though, it's not the nodes that are the problem but that the rest of the shape is drawn on the raw. Is there not a way that we could store co-ordinates and calculate shapes at different points in the pipe?

ralfbrown commented 3 years ago

Shapes are drawn by plotting their control nodes and a moderately large number (up to several hundred for circle/ellipse, up to 100 or so per control node for path and brush) of border positions for both the shape proper and the outer edge of the falloff region in RAW space and transforming them through the currently-applied module distortions. The resulting border points are then connected to draw the shape in the GUI, and the shape is filled in between the transformed points to form the stamp that is used to actually apply the shape.
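
Condensed into an illustrative sketch (the point count and names are made up; the real code paths differ):

```c
/* Condensed, illustrative sketch of the mechanism described above: a circle
 * is sampled as a few hundred border points in RAW space, each point is
 * pushed through the enabled distort callbacks, and the transformed polygon
 * is both what the GUI draws and what gets filled to form the stamp. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NBORDER 256

typedef void (*distort_fn)(float *points, int count); /* in-place (x,y) pairs */

static void sample_circle_raw(float cx, float cy, float r,
                              float pts[2 * NBORDER])
{
  for(int i = 0; i < NBORDER; i++)
  {
    const float t = 2.f * (float)M_PI * i / NBORDER;
    pts[2 * i]     = cx + r * cosf(t); /* border point in RAW coordinates */
    pts[2 * i + 1] = cy + r * sinf(t);
  }
}

static void distort_shape(float pts[2 * NBORDER],
                          const distort_fn *chain, int nmods)
{
  for(int m = 0; m < nmods; m++) /* every enabled distorting module */
    if(chain[m]) chain[m](pts, NBORDER);
  /* the GUI connects the transformed points; filling the polygon between
   * them yields the stamp used to actually apply the shape in the pipe */
}
```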

AlicVB commented 3 years ago

there IS backtransform (and transform) implemented for each "distort" iop (maybe some are wrong or not accurate enough, but that's another story). The problem with the gradient line and so on is different: it's due to the gradient being saved as a "line" == 2 points at the image extremities (or one point and an angle, I don't remember) in the initial image reference. So you get a curved line on screen...

Now we can change that (iirc, there's something like that for graduated density): it's just a matter of saving the right list of points in order to be sure to get a straight line on screen:

But the limitation is that any change in a distort module after that may provoke unexpected results (simple example: you disable lens correction).
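
A hypothetical sketch of that densified variant: sample the line in viewport space and backtransform every sample into the initial image reference, so rendering the stored polyline reproduces a straight line on screen:

```c
/* Hypothetical densified gradient line (invented names): sample the
 * on-screen line and backtransform each sample into the initial image
 * reference. Rendering the stored polyline then reproduces a straight line
 * in the viewport -- until a distort module underneath it is toggled.
 * `stored` must hold nsamples points. */
typedef struct { float x, y; } pt_t;
typedef pt_t (*backtransform_fn)(pt_t viewport); /* viewport -> RAW space */

static void store_gradient_line(pt_t vp_a, pt_t vp_b, int nsamples,
                                backtransform_fn back, pt_t *stored)
{
  for(int i = 0; i < nsamples; i++)
  {
    const float t = (float)i / (float)(nsamples - 1);
    const pt_t vp = { vp_a.x + t * (vp_b.x - vp_a.x),
                      vp_a.y + t * (vp_b.y - vp_a.y) };
    stored[i] = back(vp); /* saved in the initial image reference */
  }
}
```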

TurboGit commented 3 years ago

And don't forget that a mask can be reused in any module at any level in the pipe. Changing the current way of working is just asking for trouble and will probably lead to more issues than there are at the moment.

And yes, I suppose the current issues are math errors or rounding issues. We had some that were fixed, and today we have something quite good.

AlicVB commented 3 years ago

In fact, I think I don't really face these problems because I mainly use path masks with a lot of points, so the result is exactly what I want on screen, whatever the distortion in the image pipe. So for paths (with lots of points) or drawn masks, I guess the problem is not really there.

So the problem remains "only" for "predefined" shapes (circles, ellipses and gradient). Gradient is really a problem, but for the others... maybe we should just say somewhere that "sometimes circles can appear distorted; if you need very accurate results, better use paths to be more precise".

aurelienpierre commented 3 years ago

And we're back to enforcing workflow and rework when you change your mind, which I think was how it used to be. Personally, I'd rather live with the current limitations than that, because I do a lot of changing my mind about distortions later in my workflow.

Just because modules can be reordered now doesn't mean that the pipeline can and should be an Oktoberfest. Operations still have a legitimate order dictated by technique, and a lot of silly abuses are dictated by a false feeling of freedom. People have learned, in Gimp or Photoshop, that they are not allowed to change their mind about early layers; they know how to work with this limitation.

I'm against keeping rigid limitations if they get in the way of proper behaviour. Something has to give. There is no point in drawing if you can't reach the pixel you aimed at. By the way, how often do you use distortion AND drawn masks at the same time?

Personally, I use lens correction 100% of the time and drawn masks 30 to 50% of the time, but rarely more than 3-4 shapes, and I draw them loosely since mask feathering deals with edges for me. Fixing perspective and orientation is the first thing I do after filmic and exposure (used without masking); I crop/reframe less than 2% of the time. I use liquify occasionally to fix the shape of squeezed body fat, but honestly the magnitude of the changes is usually too small for a feathered mask to notice.

Again, though, it's not the nodes that are the problem but that the rest of the shape is drawn on the raw. Is there not a way that we could store co-ordinates and calculate shapes at different points in the pipe?

But I guess that backtransform would require the mask to be rasterized in viewport space before running, which makes it hardly suited to saving vector nodes in RAW space.

aurelienpierre commented 3 years ago

And don't forget that a mask can be reused in any module at any level in the pipe. Changing the current way of working is just asking for trouble and will probably lead to more issues than there are at the moment.

We don't care. The last module doing distortion is retouch, and the first module doing masking is exposure, which comes right after. Masking modules all see the same geometry.

elstoc commented 3 years ago

By the way, how often do you use distortion AND drawn masks at the same time?

I always use lens correction, so every time I use drawn masks I also use distortion, and I use masks quite a lot. My "travel camera" has some quite big lens distortion too. I also do a lot of crop/reframing because I'm a rubbish photographer. I use keystone more and more recently, since I'm photographing more buildings than I used to and tilt/shift lenses are way out of my price range.

Again though, I'm not talking about changing module order; I'm talking about adjusting rotation, orientation, keystone, etc. after I've done retouch and tone correction, for which I often still use drawn masks. If you're saying that a slight change to rotation or keystone adjustment will require me to redo my masks, I'm not really OK with that. If not, then I've misunderstood somewhere.

aurelienpierre commented 3 years ago

Quick test just now: trying to clone a dandelion on the grass with a fine brush.

With lens correction on: the GUI shapes are in the right place, but I duplicate grass instead. [Screenshot_20210411_203015]

With lens correction disabled: the GUI shapes are misplaced, but I get the dandelion (though inaccurately masked). [Screenshot_20210411_203034]

This is no more acceptable to me than having to move some masks after second thoughts about cropping. I mean, how am I supposed to spot the dandelion?

aurelienpierre commented 3 years ago

there IS backtransform (and transform) implemented for each "distort" iop (maybe some are wrong or not accurate enough, but that's another story)

I know there is; my point is that it doesn't run at drawing time to convert viewport coordinates to RAW coordinates. Otherwise, the straight gradient would be saved as a bent one and would be rendered, both in viewport and in pipeline, as a straight line. But then, of course, you would have to save a lot more than 2 bounds, which falls back to rasterizing the mask in viewport space.

parafin commented 3 years ago

There is a bug, apparently; that doesn't mean the original concept is flawed.