I got quite good results with similar settings when Pat David first published his article,
but preferred the fine detail level at around 2.0.
Fixed presets (as in a profile) are risky as the settings vary from photo to photo
(depending mostly on how closely the skin is captured), but I plan to make a portrait
preset with this included.
Reported by gyurko.david@e-arc.hu
on 2013-12-14 12:57:37
Yes, it's risky business using sharpening in shipped PP3s. I plan on soon making a series
of screencasts to show how to use RT on real life shots and this is something I will
definitely include.
Reported by entertheyoni
on 2013-12-14 13:37:04
We'll soon be able to draw elements in the preview thanks to the pipette patch (and
when I say "soon"... ;) ), so it'll be possible to create a healing brush (à la LR),
red-eye correction and maybe a dedicated tool to clean up skin :), which would be preferable
to a contrast tool hijacked for skin cleanup. Of course, I can handle the GUI part
(though the mechanism should make it quite easy), but someone would still need to code
the correction function itself.
Note that this would still not be local editing, since it won't be as feature-rich
as DarkTable for example, i.e. reusing the same tool multiple times, with fusion
methods.
Reported by natureh.510
on 2013-12-15 05:08:23
Wow Hombre! Awesome!
However, even with that, my request for 5 levels still stands. Healing brushes cannot
do what this technique does, unless you use the healing brush on one of these decomposed
levels, in which case 5 is still better :]
Why 5? It just seems to work best. I reached that conclusion years ago when using Gimp's
wavelet decompose on portraits, and over the years found various other people on the
net, such as Pat David, who reached the same conclusion.
Reported by entertheyoni
on 2013-12-15 14:18:49
Hombre, I can make the changes to the function.
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-15 14:31:12
Great! :)
Re #4: I was thinking about a dedicated "Plastify skin" function :). Creating 5 sliders
instead of 4 shouldn't do any harm, except that it'll break compatibility. But now
may be the right moment to do it ;)
Reported by natureh.510
on 2013-12-15 14:56:52
To avoid breaking compatibility one could insert intermediates, which would mean 7 levels
(I remember 7 levels when this pyramid was first implemented):
0 acts as now
0.5
1 acts as now
...
Reported by iliasgiarimis
on 2013-12-15 18:20:20
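One reading of this scheme (an assumption; the thread doesn't spell out the mapping): the four existing pp3 values would keep their meaning at the integer positions 0, 1, 2 and 3, while the inserted half-step levels 0.5, 1.5 and 2.5 would default to a neutral multiplier of 1.0, so old profiles would load with unchanged behaviour.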
Or, as I was going to suggest a week ago but forgot, we could replace all sliders with
a flat curve? Target what you want with surgical precision.
Reported by entertheyoni
on 2013-12-15 18:30:23
Maybe Emil could enlighten us, but IIRC each new slider involves a new "wavelength image"
(i.e. one image per slider), so a curve at least isn't possible, and using 7 sliders
would also increase memory consumption (I don't know by how much). Furthermore,
would it still be suitable for skin retouching?
Reported by natureh.510
on 2013-12-15 21:47:06
It would, but I don't think 7 is practical for anything, or at least I haven't found
a use for so many. Sharpening is best done using USM and RL. CBDL is great at mitigating
certain frequencies, which is why it's so good for skin. 7 sliders would require moving
2 extra sliders for which there is no need. We always advertised CBDL as a sharpening
tool, but for a long time now (since you made the USM threshold tool) USM and RL have
been better at that, so I don't think CBDL is popular, and I wouldn't worry about
breaking backwards compatibility. Better to advertise it as a tool for fixing skin
(and whatever other purposes wavelet decomposition is good for - I haven't found any,
but I'd love to hear if you did!) and to make that efficient in the future.
In any case, for now I just ask for a dirty patch that uses 5 levels so we can test,
and only then decide what we want to do.
Reported by entertheyoni
on 2013-12-15 21:59:34
I regularly use Topaz Detail for skin retouching within Photoshop. Its internal method
is essentially the same, using frequency decomposition. In Topaz Detail there are 3
detail levels, controlled via 6 sliders, so 2 sliders per level: correction and
boost.
A similar effect can be achieved in RT using CBDL. However, when used properly, one needs
to apply this effect locally to different areas of the skin / image to preserve the overall
fine appearance. Since we don't have local editing, this requires 2 conversions and
manual blending.
Here is the illustration of skin smoothing:
http://i.imgur.com/hzgA7ug.jpg
Reported by michaelezra000
on 2013-12-15 22:34:49
Re #10: I'm using the CBDL tool like a Tone Mapping tool, to enhance the contrast in
dull shots, like the Local Contrast slider in the HR tool. In fact, I didn't know that
it was advertised as a sharpening tool :)
Reported by natureh.510
on 2013-12-15 23:49:46
You must have some ad-blocking software installed ;]
Reported by entertheyoni
on 2013-12-16 00:26:29
Here is another illustration of skin texturing:
http://www.michaelezra.com/Projects/RT/images_shared/RT_Skin_02.jpg
Reported by michaelezra000
on 2013-12-16 02:23:37
I also use it for local contrast, and I would like to have an even coarser slider
(16 px radius) :)
Reported by iliasgiarimis
on 2013-12-16 02:29:34
Here is a patch, including Ingo's patch for the core function.
It also contains an invasive cleanup of trailing blank characters (spaces, tabs) at the
ends of lines, e.g. in improcfun.cc. If you're fine with that, I may activate this
cleanup (it's an option in Eclipse) for more files - or should we discuss that elsewhere?
Another thing: do you plan to request a 6th level one day? :) If so, I could create
a global compiler option. If not, then fine, I also have other things to code.
Reported by natureh.510
on 2013-12-18 03:38:04
PatchSubmitted
Oh, and I didn't test the backward compatibility, as I rarely use this tool
and am inclined to self-confidence :].
Reported by natureh.510
on 2013-12-18 03:39:13
Hombre, the blank cleanup would be better introduced when all large patches are in.
Maybe after tagging 4.1?
Reported by michaelezra000
on 2013-12-18 05:22:09
Re #18: I agree, but there should be coordination between devs to find a point where
it will be possible. Even if big patches are still under development, though, there's
a pretty simple method to do it. But we'll certainly discuss that in another issue.
For sure, it can wait until after 4.1!
Reported by natureh.510
on 2013-12-18 10:34:15
Hombre, what is the new scaling?
The old one was 1,2,4,8 as I remember, and DrSlony, if I understand him correctly, needs
one more level at 6, i.e. 1,2,4,6,8.
I hope someone will compile for win32 to give me a chance to test.
Reported by iliasgiarimis
on 2013-12-18 11:52:25
I think gaaned has received the notification for the "Waiting for a new build" thread;
I'll certainly provide something, but you may ask him by PM.
I don't know about the new scaling, that was Ingo's part of the work.
I've removed the blank char cleanup; the patch is now half the size.
Reported by natureh.510
on 2013-12-18 12:57:01
Actually the levels are {1,2,4,8,16}. But it's no problem to change this to other values.
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-18 13:07:31
Hombre, sorry for the trouble with the blanks.
It was me who sent a patch with trailing blanks stripped to Hombre. So I'm guilty!
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-18 13:13:59
Re #9: Memory consumption increases by width*height*4 bytes for each level we add.
Reported by heckflosse@i-weyrich.de
on 2013-12-18 13:32:40
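To put that formula in concrete terms: on a 36 MPix image, each additional level costs 36,000,000 × 4 bytes ≈ 137 MiB of extra memory.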
Hombre, when I change the values in Contrast by Details, RT writes this into the pp3:
[Directional Pyramid Equalizer]
Enabled=true
Mult0=2
Mult1=1.75
Mult2=1.5
Mult3=1.25
Mult4=1.1000000000000001
Threshold=true
and then, at the next start of RT, correctly complains that it can't interpret the
value of Threshold.
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-18 14:02:41
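For readers unfamiliar with the pp3 handling, the mismatch can be reduced to a minimal sketch. It assumes the Glib::KeyFile API that the procparams code is built on; this is an illustration of the failure mode, not the actual procparams.cc code:

#include <glibmm.h>
#include <iostream>

int main() {
    Glib::init();
    Glib::KeyFile kf;

    // Writer side: Threshold stored as a boolean, producing "Threshold=true"
    kf.set_boolean("Directional Pyramid Equalizer", "Threshold", true);

    // Reader side: expects a number, so parsing "true" fails at the next start
    try {
        double threshold = kf.get_double("Directional Pyramid Equalizer", "Threshold");
        std::cout << "Threshold = " << threshold << std::endl;
    } catch (const Glib::KeyFileError& e) {
        std::cout << "can't interpret Threshold: " << e.what() << std::endl;
    }
    return 0;
}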
I found a way to save width*height*4 bytes, so the new 5-level version has the same memory
footprint as the old 4-level version. I will make a patch as soon as Hombre has corrected
the pp3 threshold bug.
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-18 15:13:01
Re #27: Ooops, it would have been too simple :( Here is the fix.
Re #23: Does it have to be a power of 2? I'm asking because this tool will produce
a very different result when used on a 12 MPix image vs a 36 MPix image, given the
same scene captured at a different sampling rate. Shouldn't those coefficients
depend on the image size, with the finest coefficient still operating at the
single-pixel level? Or is that already the case?
Reported by natureh.510
on 2013-12-18 19:40:58
Re #29: Hombre, thanks for the patch. I will use it as the base for #28.
Concerning the values for the levels: they were there in the old version, even for the
higher unused levels, and I left them as they were before. If we change one of the
first four values {1,2,4,8}, we break compatibility with old pp3 files, and I don't
know how to avoid that in this case (I missed this point in #23).
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-18 20:21:46
Re #29: Values for the levels: it would be possible to use different values depending
on the image size or whatever (e.g. {1,3,9,27,81}) if we had a new control to reflect
and change this setting. This way we wouldn't break compatibility with old pp3 files
(just set the base to 2) but would allow choosing different coefficients based on
image size, or manually.
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-18 20:40:33
Re #31: Like a "Scale" slider? It would also let the user expand or contract the "Coarsest"
wavelength (and all the wavelengths in between, the finest being fixed at 1 pixel),
depending on e.g. the size of the faces in the image.
If yes, the coefficients should depend on:
a. the longest side of the image
b. the total number of pixels
When loading an old pp3, the Scale value would be set to -1, which would then trigger
a method to compute this value back from the image size. A scale range of [0.02;
3] may be a good start, but I don't know how to decide on the best default value
(scale 1.0)!?
Anyway, this scale enhancement can be done after the 4.1 release.
@Ingo
May I commit my patch now, since it answers the request of the initial post (does
it, DrSlony?)? Your enhancement could go in a second commit on this issue. I can
wait if that would give you more work, no problem.
Reported by natureh.510
on 2013-12-18 21:08:14
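As a sketch of that pp3 fallback idea (the -1 sentinel comes from the comment above; the function name and the 12 MPix reference heuristic are purely hypothetical):

#include <cmath>

// Hypothetical helper: derive a Scale value when an old pp3 stores none.
double effectiveScale(double storedScale, int width, int height) {
    if (storedScale < 0.0) {
        // Old pp3: Scale was set to the -1 sentinel on load, so compute a
        // value back from the image size (here: 12 MPix maps to scale 1.0).
        return std::sqrt(double(width) * double(height) / 12.0e6);
    }
    return storedScale; // new pp3: use the stored value as-is
}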
No, please don't commit. Have a look at PM first!
Reported by heckflosse@i-weyrich.de
on 2013-12-18 21:31:01
Ooops, line 1621 of procparams.cc should be changed to:
if (ppVersion < 316) {
Reported by natureh.510
on 2013-12-18 21:43:01
Now you can commit your patch ;-)
Reported by heckflosse@i-weyrich.de
on 2013-12-18 22:04:13
Done, thanks :).
DrSlony, I'll let you close this issue if you want us to open a new one for what we
discussed in comments 29 to 32.
Reported by natureh.510
on 2013-12-18 22:19:53
DrSlony, if you wait half an hour, I can also commit #28. 32-bit users would be thankful,
I think.
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-18 22:34:41
Committed #28 to default.
Reported by heckflosse@i-weyrich.de
on 2013-12-18 22:48:12
Thank you for working on this!
@37 No rush :)
Artifacts when moving the L curve's black point any tiny amount away from 0,0: http://i.imgur.com/micuSjo.jpg
Reported by entertheyoni
on 2013-12-18 23:00:38
Re #29: I'm looking, but I think the error is somewhere else and is only amplified by
Contrast by Details. I just checked with cf0910c613c4, the revision before this issue's
first commit. Same artifacts. So at least it's not caused by this issue.
Reported by heckflosse@i-weyrich.de
on 2013-12-18 23:24:02
Typo: Should have been Re #39 in last post
Reported by heckflosse@i-weyrich.de
on 2013-12-18 23:24:38
With 4.0.11.198 (gaaned's build) the artifacts from #39 appear only in the edit panel's
1:1 display. The exported TIFF has no artifacts, with or without resampling.
Reported by iliasgiarimis
on 2013-12-19 00:51:15
And it doesn't happen on all images. Maybe only UniWB images are affected (to be confirmed).
Reported by natureh.510
on 2013-12-19 00:58:47
Thanks Ilias, good hint. That leads the search for the origin of this bug to a different
direction...
Reported by heckflosse@i-weyrich.de
on 2013-12-19 01:07:00
Correction: so far this problem has only occurred here on the linked image.
Reported by natureh.510
on 2013-12-19 01:08:44
I am curious: why is (int) being used in dirpyr_equalizer.cc?
E.g. line 192:
dst[i][j] = CLIP((int)( buffer[i][j] ));
Reported by michaelezra000
on 2013-12-19 03:49:03
Re #46: because nobody changed it :)
Re #39: As I feared, the error with this picture also occurs with Tonemapping and RL
sharpening. So it's definitely a different issue.
Reported by heckflosse@i-weyrich.de
on 2013-12-19 11:35:54
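To illustrate what that cast does, here is a small sketch with a simplified stand-in for RT's CLIP (assuming the 16-bit working range; not the actual dirpyr_equalizer.cc code):

#include <algorithm>
#include <cstdio>

// Simplified stand-in for RT's CLIP, clamping to an assumed 16-bit range.
static inline float CLIP(float v) {
    return std::min(std::max(v, 0.f), 65535.f);
}

int main() {
    float value = 123.7f;
    std::printf("with (int) cast:  %g\n", CLIP((int)value)); // 123: the fraction is truncated
    std::printf("without the cast: %g\n", CLIP(value));      // 123.7: precision is kept
    return 0;
}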
I think it's an old issue, in fact (I cannot find it now). There, the same artifacts occurred
with strong settings on local contrast tools (edges, tonemapping, CBDL coarsest, sharpening)
applied to already defective (NaN) pixels, whose defect propagated into the neighbourhood
depending on the tool and the intensity of its settings. The new, larger level in CBDL
made it more visible.
But all of the above is invisible without moving the L curve's black point!! So maybe one
can do something there too.
Is -fno-fast-math used in current compilations?
Reported by iliasgiarimis
on 2013-12-19 13:22:52
Re #32: In the current implementation, for each level the value of (scale * 2) has to
be an integer to avoid unexpected behaviour caused by rounding. This limits the possible
values of the scalebase if we calculate the levels by scale[level] = pow(scalebase, level).
In that case only scalebase = 2, giving scale = {1,2,4,8,16}, and scalebase = 3,
giving scale = {1,3,9,27,81}, would make sense - maybe even scalebase = 4.
But we could allow setting the values per level, for example {1,2,4,6,8} or anything
else where scale[0] = 1 and scale[level+1] > scale[level].
Re #48: -ffast-math is not enabled, at least in my private builds. It's also not
enabled by default at any optimization level. So I think we can exclude it as a potential
cause. But as you said, maybe we should have a look at the L curve.
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-19 13:44:29
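A minimal sketch of that constraint (not RT code): compute scale[level] = pow(scalebase, level) and check whether scale * 2 is an integer; a fractional base such as 1.5 is included to show a failing case:

#include <cmath>
#include <cstdio>

int main() {
    const int levels = 5;
    for (double base : {2.0, 3.0, 1.5}) {
        std::printf("base %.1f:", base);
        for (int level = 0; level < levels; ++level) {
            double scale = std::pow(base, level); // scale[level] = scalebase^level
            // The constraint: (scale * 2) must be an integer to avoid rounding issues
            bool ok = (scale * 2.0) == std::floor(scale * 2.0);
            std::printf(" %g%s", scale, ok ? "" : "(!)");
        }
        std::printf("\n");
    }
    return 0;
}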
Any idea why there are no artifacts in the exported TIFF?
Something with LCMS when fed with display data?
Reported by iliasgiarimis
on 2013-12-19 15:09:57
Re #48: I placed a check for NaN pixels at the beginning of Contrast by Details (before
processing starts). There are NaN pixels in the 100% preview, but not when exporting to
TIFF. But I don't know the origin of these NaN pixels. Maybe a curve specialist
could have a look?
Ingo
Reported by heckflosse@i-weyrich.de
on 2013-12-19 17:08:53
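The kind of check described here could look like the following sketch (the buffer layout is illustrative; RT's actual image classes differ):

#include <cmath>

// Scan a float image plane for NaN pixels before processing starts.
bool hasNaNPixels(float** img, int height, int width) {
    for (int i = 0; i < height; ++i) {
        for (int j = 0; j < width; ++j) {
            if (std::isnan(img[i][j])) { // NaN is the only value for which x != x
                return true;
            }
        }
    }
    return false;
}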
Ingo, DrSlony opened a new issue
http://code.google.com/p/rawtherapee/issues/detail?id=2144
Reported by iliasgiarimis
on 2013-12-19 18:01:02
Originally reported on Google Code with ID 2124
Reported by entertheyoni
on 2013-12-13 19:02:07