Beep6581 / RawTherapee

A powerful cross-platform raw photo processing program
https://rawtherapee.com
GNU General Public License v3.0

bugfixes and color accuracy changes for branch_3.0 #404

Closed Beep6581 closed 9 years ago

Beep6581 commented 9 years ago

Originally reported on Google Code with ID 414

Purpose of code changes on this branch:
- correct gamma curve constants for sRGB (in rtengine/curves.h)
- improve code readability by using more brackets in several cases
- change sRGB matrices to Bruce Lindbloom's standards
  to improve color accuracy (see http://www.brucelindbloom.com/ and the sketch after this list)
- remove duplicate lookup tables (xcache and zcache).
- trim lookup tables to the actual range used.
- fix HSV band selection (value 8092 should be 8192)
- improve highlight correction with better color preservation and
  a logarithmic roll-off to better match human visual perception.
- improve accuracy of RGB transformations by using floats
- use a constant chroma scale to ensure Lab values are accurate.
- fix clipped raw values below the RGB maximum; this removes color casts in
  highlights that are supposed to be white.
- fix a crash caused by rotated TIFF files
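For reference, here is a minimal sketch of the standard sRGB companding constants and
Bruce Lindbloom's sRGB (D65) RGB-to-XYZ matrix that the items above refer to. The values
are quoted from brucelindbloom.com; the function and array names are illustrative only
and do not correspond to the symbols in rtengine/curves.h:

// Illustrative only: standard sRGB companding constants and the
// Bruce Lindbloom sRGB (D65) linear RGB -> XYZ matrix.
#include <cmath>

// sRGB encoding: linear -> companded
inline double srgb_gamma(double x) {
    return (x <= 0.0031308) ? 12.92 * x
                            : 1.055 * std::pow(x, 1.0 / 2.4) - 0.055;
}

// sRGB decoding: companded -> linear
inline double srgb_igamma(double x) {
    return (x <= 0.04045) ? x / 12.92
                          : std::pow((x + 0.055) / 1.055, 2.4);
}

// Bruce Lindbloom's sRGB (D65) linear RGB -> XYZ matrix
static const double srgb_to_xyz[3][3] = {
    { 0.4124564, 0.3575761, 0.1804375 },
    { 0.2126729, 0.7151522, 0.0721750 },
    { 0.0193339, 0.1191920, 0.9503041 },
};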

When reviewing my code changes, please focus on:

- color accuracy and tonal differences when using highlight compression
- please check stability; the changes should not affect it.
- the use of floats can decrease speed slightly
- the HSV equalizer should be a bit more accurate now.
- it may be my imagination, but I seem to see a reduction
  in noise levels overall
- 100% highlight compression + large exposure compensation now creates
  (i.m.h.o.) a very pleasing HDR effect.

To improve accuracy further I have some interesting things still waiting here, like
dithering for 24-bit color and full float processing in RGB.

Please feel free to comment on this patch and try it out in combination with branch_3.0.
Even though I think the results I get with this patch are really good, there may be
many oversights that I have not been aware of.

Reported by janrinze on 2010-12-16 13:33:55


Beep6581 commented 9 years ago
Great cleanup Jan, looks good to me (though I could not find a difference in noise level
in my test pictures). To keep DEFAULT from becoming a hell to merge later, here is your
patch modified for DEFAULT (what was applicable).

Reported by oduis@hotmail.com on 2010-12-16 15:55:17


Beep6581 commented 9 years ago
Wow, thanks for this excellent contribution, Jan!
Color accuracy is one of the most important fields IMHO and I'm really happy to see
such a progression here.

How do you measure color accuracy?
Perceptual control is very important (that's the nature of photographs... we LOOK at
them :) ), but is there any toolchain/environment that you used?

I hope I can test this patch very soon.

Reported by gyurko.david@e-arc.hu on 2010-12-16 15:56:12

Beep6581 commented 9 years ago
I really, really appreciate this striving for the best image quality possible. Keep up
the good work!

Reported by torger@ludd.ltu.se on 2010-12-17 11:58:29

Beep6581 commented 9 years ago
thanks!
I am working on getting it even better :-)

Reported by janrinze on 2010-12-17 14:32:47

Beep6581 commented 9 years ago
attached an experimental new patch which does all of the described stuff and adds:
- floating-point RGB processing
- ordered dithering for 24-bit output to avoid banding issues
- correct adaptation for exposure when using highlight reconstruction methods
  (my previous method did not do that)
- slightly increased strength of the highlight compression tool.

Reported by janrinze on 2010-12-17 22:08:55


Beep6581 commented 9 years ago
Is dithering turned off for 16-bit output?  Can it be turned off for 8-bit output if
the user desires (for instance, there is no use adding to the noise when the noise is
already sufficient to dither)?

Reported by ejm.60657 on 2010-12-17 22:29:29

Beep6581 commented 9 years ago
These implementations of luminance-based tone mapping seem flawed to me:

+            index = sqrt((r*r+g*g+b*b)/3.0);
+            tonefactor=flinterpol(my_tonecurve,index);

+           index = CLIP(sqrt((r*r+g*g+b*b)/3.0));
+           tonefactor=(index>0.0)?flinterpol(shtonecurve,index)/index:0.0

This is not the way color spaces work.  One should not think of RGB as an orthogonal
3D vector space, and so the 'radius' defined by 'index' above is not the luminance.
Rather, luminance is a *linear* combination of R, G, and B; for instance, in the YCbCr
color space the luminance Y is defined as

-           int Y = (int)(0.299*r + 0.587*g + 0.114*b);

as it used to be in the code.  Basing the tonal multipliers on the RMS of R, G, B is
colorimetrically incorrect.
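A minimal sketch of the luminance-driven alternative, reusing the patch's flinterpol and
shtonecurve names for illustration (r, g, b are assumed to be linear float channel values):

// Sketch only: tone factor driven by luminance (Rec. 601 weights)
// rather than by the RMS "radius" of the RGB vector.
float lum = 0.299f * r + 0.587f * g + 0.114f * b;             // linear combination
float tonefactor = (lum > 0.0f) ? flinterpol(shtonecurve, lum) / lum : 1.0f;
r *= tonefactor;                                               // same factor on all
g *= tonefactor;                                               // channels preserves hue
b *= tonefactor;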

Reported by ejm.60657 on 2010-12-17 22:38:42

Beep6581 commented 9 years ago
In this patch dithering is on for 8-bit output.
The dithering will not be noticeable when noise is sufficient; it causes a one-step
difference, and if noise is visible then that will be much more than one step.
The dithering makes areas with very low gradients look tremendously better and removes
banding. Specifically when used in combination with noise reduction it produces very
pleasing results (i.m.h.o.)

If the dithering were introduced to the repo it would surely need options for turning
it on/off and setting it up. There are a lot of choices that can be made with respect to
dithering. I have used a 16x16 ordered dither matrix but other options might be more
convenient. For example, for great output to 8-bit TIFF it would be preferable to have
random dithering, which will not lead to patterned dithering.

So consider this patch an overview of what I am working on.

Reported by janrinze on 2010-12-17 22:45:18

Beep6581 commented 9 years ago
What dithering method is used?  Floyd-Steinberg, or another?

Reported by ejm.60657 on 2010-12-17 22:52:29

Beep6581 commented 9 years ago
The implemented dithering is Bayer ordered dithering. It is a matrix with a full distribution
of all intermediate steps with maximal entropy (according to what I read about it).

Floyd-Steinberg would have been great, but it is hard to do in parallel.
Using random dithering would yield the best results but is also a bit slower.
On sufficiently fast machines it can be implemented equally well with 'rand()', but
the results would only be noticeable at > 100% magnification.
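As a rough illustration of the idea, here is what ordered dithering of a 16-bit value down
to 8 bits can look like, using a small 4x4 Bayer matrix instead of the 16x16 one in the patch
(names are hypothetical, not taken from the patch):

// Sketch: ordered (Bayer) dithering from 16 bits to 8 bits.
static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

inline unsigned char dither16to8(unsigned short v, int x, int y) {
    int threshold = (bayer4[y & 3][x & 3] * 256) / 16;  // 0, 16, 32, ..., 240
    int frac = v & 255;                                  // sub-8-bit remainder
    int out = (v >> 8) + (frac > threshold ? 1 : 0);     // round up past the threshold
    return (unsigned char)(out > 255 ? 255 : out);
}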

Reported by janrinze on 2010-12-17 23:08:45

Beep6581 commented 9 years ago
Really exciting stuff, must check this out soon.

Quick comment though: last time I looked into random number generation where fast speed
and good pseudo-random quality were required, a Tausworthe generator was the thing to use;
it looks something like this:

#include <stdint.h>   /* for uint32_t */

/*
 * Generates numbers between 0x0 - 0xFFFFFFFF
 */
static inline uint32_t
tausrand(uint32_t state[3])
{
#define TAUSWORTHE(s,a,b,c,d) ((s & c) << d) ^ (((s <<a) ^ s) >> b)  

  state[0] = TAUSWORTHE(state[0], 13, 19, (uint32_t)4294967294U, 12);
  state[1] = TAUSWORTHE(state[1], 2, 25, (uint32_t)4294967288U, 4);
  state[2] = TAUSWORTHE(state[2], 3, 11, (uint32_t)4294967280U, 17);

  return (state[0] ^ state[1] ^ state[2]);
}

static void
tausinit(uint32_t state[3],
         uint32_t seed)
{
  /* default seed is 1 */
  if (seed == 0) {
      seed = 1; 
  }

#define LCG(n) ((69069 * n) & 0xFFFFFFFFU)

  state[0] = LCG(seed);
  state[1] = LCG(state[0]);
  state[2] = LCG(state[1]);

  /* "warm it up" */
  tausrand(state);
  tausrand(state);
  tausrand(state);
  tausrand(state);
  tausrand(state);
  tausrand(state);
}                   

It was much faster than rand()... but things may have changed since I looked into it
last (~9 years ago).
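A possible way to hook it into 16-to-8-bit rounding, as a sketch only (pixel16 is a
placeholder for a 16-bit channel value, not a name from the code):

/* Usage sketch: one generator state, one random threshold per pixel. */
uint32_t state[3];
tausinit(state, 1234);                       /* any seed; 0 falls back to 1 */

uint32_t threshold = tausrand(state) & 255;  /* uniform in 0..255 */
int out8 = (pixel16 >> 8) + ((pixel16 & 255) > threshold ? 1 : 0);
if (out8 > 255) out8 = 255;                  /* clamp after rounding up */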

Reported by torger@ludd.ltu.se on 2010-12-18 00:18:54

Beep6581 commented 9 years ago
I would love to do some testing of this patch as soon as I am able to compile the
default branch (still having trouble after the wincludes changes).

Reported by michaelezra000 on 2010-12-18 05:32:46

Beep6581 commented 9 years ago
Hello, is this patch already pushed into branch_3.0? This morning I noticed that when
setting Saturation to -100 in the LAB curves section, the image is not desaturated
as before (yesterday) but still has a magenta cast. To make a b&w image, one needs
Sat=-100 in the Exposure section now. I preferred the first method. 

7049bbc01933+ 722+ branch_3.0, Ubuntu 10.10, 32-bit. 

Reported by paul.matthijsse4 on 2010-12-18 11:20:02

Beep6581 commented 9 years ago
I've tried out the patch branch30_4.patch now on this problematic photo:

http://torger.dyndns.org/rt-bugs/IMG_5015.CR2

When the default pyramid noise reduction is applied (lum 10, chro 10, gamma 2.0) there
is still banding in the sky (note: the sky is dark, and a calibrated monitor is required
to see the problem; watching the monitor at an angle can also work if it is an LCD-TFT).
Other raw converters I've tested keep some noise (through dithering or otherwise, I don't
know) so that banding does not occur.

So in this particular case it did not help reduce banding.

Reported by torger@ludd.ltu.se on 2010-12-18 17:13:10

Beep6581 commented 9 years ago
@Paul: No, none of this has been pushed to 3.0beta as far as I can see.

Reported by ejm.60657 on 2010-12-18 17:13:31

Beep6581 commented 9 years ago
@Torger: if you zoom in at 800%, can you see a dither pattern? If not, then perhaps you
did not apply the patch correctly?

Your file has been one of my test files actually to see if the results of dithering
would help.

Reported by janrinze on 2010-12-19 13:17:59

Beep6581 commented 9 years ago
Comparison before/after patch can be viewed here:
http://www.timelessme.com/temp/postings/RT_Issue-414_01.jpg
Highlight recovery is even more powerful now.

There is a problem with color casts and Lab.
Emil pointed to the issue with Lab in Comment 7.

Reported by michaelezra000 on 2010-12-19 15:17:46

Beep6581 commented 9 years ago
There seems to be some improvement, look here:

http://torger.dyndns.org/rt-bugs/IMG_5015-nodither.jpg
http://torger.dyndns.org/rt-bugs/IMG_5015-dither.jpg

The first is without dither, the other with dither. Banding happens when you have a large
field of a single color transitioning into another large field of a single color.
I have used the GIMP selection tool to select a single-color field and filled it with
red/green/blue so it is easy to see the fields with a single color.

Not a 100% high-quality comparison; I mistakenly saved as JPG instead of PNG, but
the distortion is not too bad (the blockiness of the borders of the colorized fields
in the nodither JPG is not a compression artifact, it really is that blocky). The nodither
result is also from a considerably older RT. Both have been rendered with pyramid
noise reduction set to default parameters.

Anyway, it does seem like the dither makes some difference: looking at 100%, the
single-color fields have been broken up with some dither and the borders are not as blocky
as in the nodither JPG. However, since there are those large single-color fields, banding
persists. I guess that is a problem (or feature) of the pyramid denoiser, in combination
with the luminance resolution limitations of the camera.

Getting large single-color fields in a photographic image means banding, so you just don't
want that. My guess at what happens is that the 12-bit input from the camera has
quite clear limitations in the dark colors (few luminance steps), so when noise is
removed it easily happens that large single-color fields form, even if the software
works in 16-bit or floating point. That is, I think the large single-color fields are
there before bit reduction and dither are applied.

The easy way out is to state that this is a feature of the denoiser and the problem is
camera limitations. However, perhaps there is some possibility of detecting when large
single-color fields form and then doing something smart.

Reported by torger@ludd.ltu.se on 2010-12-19 15:24:51

Beep6581 commented 9 years ago
To be clearer, it seems like the dithering does work and do provide improved image quality
of 8 bit output, but the banding problem cannot be solved by dithering alone.

The banding problem is, as far as I can understand, a separate discussion about how the
denoise algorithm should work and how (if at all) it should relate to bit-resolution
limitations in the cameras.

Reported by torger@ludd.ltu.se on 2010-12-19 15:40:52

Beep6581 commented 9 years ago
If you take your banding image and output it as a 16-bit file, then take it into Photoshop,
is the output banding?  In other words, is the 16-bit output of the NR tool showing
banding, or only its rendering as 8-bit in RT?  The 16-bit output shouldn't show banding;
if it does, there is a flaw in the NR tool; if it doesn't, then it just means that Jan's
dithering implementation is suboptimal.

Reported by ejm.60657 on 2010-12-19 16:22:48

Beep6581 commented 9 years ago
Yes, that is a test I would like to perform; however, I am a Linux user and GIMP is limited
to 8 bit (and the known 16-bit alternative, CinePaint, is currently broken), so I don't
have a 16-bit paint program... I'll see if I can come up with something.

Reported by torger@ludd.ltu.se on 2010-12-19 16:29:41

Beep6581 commented 9 years ago
You have digikam and krita.

Reported by entertheyoni on 2010-12-19 19:44:23

Beep6581 commented 9 years ago
What NR settings are you using?

Reported by ejm.60657 on 2010-12-19 22:06:30

Beep6581 commented 9 years ago
I wrote a simple program that does the following: it reads the raw samples and remembers
each unique color. Then it renders a new image where each unique color is replaced
with an 8-bit color going through the scale r++ g++ b++. The 16-bit file has many unique
colors, meaning that you'll see horizontal lines in the artificial picture, since new
colors from the palette are fetched all the time, and the image is scanned up and down,
left to right.
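The approach could look roughly like this (a reconstruction of the idea only, not the
actual test program; all names and the exact palette walk are assumptions):

// Map every unique 16-bit RGB triple to the next color in a simple 8-bit
// palette, so areas built from only a few unique colors show up as large
// flat fields in the output image.
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct RGB8 { uint8_t r, g, b; };

std::vector<RGB8> uniqueColorMap(const std::vector<uint16_t>& img,  // interleaved RGB
                                 size_t npix) {
    std::map<std::tuple<uint16_t, uint16_t, uint16_t>, RGB8> palette;
    RGB8 next = {0, 0, 0};
    std::vector<RGB8> out(npix);
    for (size_t i = 0; i < npix; ++i) {
        auto key = std::make_tuple(img[3*i], img[3*i + 1], img[3*i + 2]);
        auto it = palette.find(key);
        if (it == palette.end()) {
            it = palette.emplace(key, next).first;  // assign the next palette color
            ++next.r; ++next.g; ++next.b;           // walk up the 8-bit scale
        }
        out[i] = it->second;
    }
    return out;
}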

Denoise settings are default, lum 10 chro 10 gam 2.

Here's the unique color 16 bit image:
http://torger.dyndns.org/rt-bugs/IMG_5015-16bit-unique.png

The unique color map of the 16-bit output truncated to 8 bits (each pixel value divided
by 256):
http://torger.dyndns.org/rt-bugs/IMG_5015-8bit-unique-truncated.png

And here's the unique color map of the RawTherapee 8-bit output with the dithering patch applied:
http://torger.dyndns.org/rt-bugs/IMG_5015-8bit-unique-dithered.png

A bit unfortunate that I started palette counting at 0,0,0, so the sky is still a bit dark,
but you can see the obvious:

 - There are no large single-color fields in the 16-bit output
 - There are large single-color fields when truncating the 16-bit output
 - The dithering makes little difference.

We probably need more dithering noise or something...

Reported by torger@ludd.ltu.se on 2010-12-19 23:19:04

Beep6581 commented 9 years ago
I guess the problem is that in the sky the noise amplitude in the 16-bit image is less
than one 8-bit step, so even if dithering is applied in the transition zones between
two 8-bit steps, banding will still occur.

I think it will be near impossible to get banding-free output if there are large single-color
fields in the 8-bit output, even if there are wide dithered zones between the
steps -- in other words, all of the sky must have a little noise left.

Whether that should be the task of the bit-reduction engine (adding noise there if necessary)
or of some special 8-bit-output option for the denoiser so that it leaves more noise,
I don't know.

Reported by torger@ludd.ltu.se on 2010-12-19 23:28:56

Beep6581 commented 9 years ago
@torger: dithering will not change the fact that there are still only 8 bits per channel.
So if you count unique colors in the 8-bit-per-channel output, there is likely not to be
much change. The dither value is the same for R, G and B; perhaps a noise-based dither
applied separately to the R, G and B channels might give improved results.
Will look into that.

Reported by janrinze on 2010-12-19 23:40:47

Beep6581 commented 9 years ago
In this sort of situation, what one wants is a random dither that gives a probability

prob = (pixelvalue & 255)/256

of rounding up to the next higher 8-bit value, and (1-prob) of rounding down.  This will
ensure that the average value in a neighborhood reflects the underlying average 16-bit
value rather than posterizing everything to a single value.  In other words, averaging
over a larger area, the average value is preserved to within less than an 8-bit step,
whereas without this sort of dither the average value is consistently rounded either down
or up, resulting in posterization where the average value runs through a half-step between
8-bit values.
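A minimal sketch of that prescription, per channel, with a uniform random threshold
(names are hypothetical):

// Stochastic rounding from 16 bits to 8 bits: the fractional remainder sets
// the probability of rounding up, so the local 8-bit average tracks the
// underlying 16-bit average.
#include <cstdint>
#include <random>

inline uint8_t stochasticRound(uint16_t v, std::mt19937& rng) {
    uint32_t frac = v & 255;                 // remainder below one 8-bit step
    uint32_t threshold = rng() & 255;        // uniform in 0..255
    uint32_t out = (v >> 8) + (frac > threshold ? 1 : 0);
    return (uint8_t)(out > 255 ? 255 : out);
}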

Reported by ejm.60657 on 2010-12-20 00:04:09

Beep6581 commented 9 years ago
For those who want to know what dithering does: http://en.wikipedia.org/wiki/Dither
Quantization that causes banding is combined with either a pattern or an algorithm to
reduce the banding. Usually this is done by adding a patterned offset or noise to the
pixel values before quantization, where the value added lies within the range 0..1 of
the quantization step. This makes values round upwards if the offset is sufficiently large.
Since the offset is added while still in 16 bit, the ratio of rounded-up pixel values is
linear in the quantization error of their quantized counterparts.
16-bit supersampling of such a dithered image will show the intermediate values, but it
also shows that dithering has limited powers. The number of intermediate steps
will equal the number of pixels supersampled. To get full 16 bit back you would
need to supersample at least 16x16 (256 pixels).

Reported by janrinze on 2010-12-20 00:48:13

Beep6581 commented 9 years ago
I bet if one made an array of random numbers of some reasonable size, e.g. 64x64, one could
use it to randomly pick whether to round up or round down according to

1) pixel location mod 64 in both directions
2) pixel value mod 256 on the 16-bit scale as a probability weight (i.e. the fraction of
an 8-bit step).

In other words, if the fractional 8-bit value is larger than the random number, round
up; if smaller than the random number, round down.   The use of a fixed array of random
numbers speeds up the dithering if there is a large time penalty for generating random
numbers (dunno the facts about that).
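A sketch of the table-based variant: the same stochastic rounding, but with thresholds
drawn once into a 64x64 table and indexed by pixel position (all names are hypothetical):

// Precomputed 64x64 table of random thresholds, indexed by (x mod 64, y mod 64),
// so no random numbers need to be generated per pixel.
#include <cstdint>
#include <random>

struct DitherTable {
    uint8_t t[64][64];
    DitherTable() {
        std::mt19937 rng(1);                       // fixed seed, arbitrary
        for (auto& row : t)
            for (auto& v : row) v = rng() & 255;
    }
};

inline uint8_t tableDither(uint16_t v, int x, int y, const DitherTable& d) {
    uint32_t out = (v >> 8) + ((v & 255) > d.t[y & 63][x & 63] ? 1 : 0);
    return (uint8_t)(out > 255 ? 255 : out);
}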

Reported by ejm.60657 on 2010-12-20 01:51:27

Beep6581 commented 9 years ago
I've actually coded dithering myself (for audio) back in the days when I wrote audio
convolution software, so I'm quite familiar with the concept. However, I think audio
dither is much more straightforward since it is one-dimensional, and it is well documented
which methods work best.

In this case I guess we need to test a few different approaches...

Reported by torger@ludd.ltu.se on 2010-12-20 06:35:42

Beep6581 commented 9 years ago
When thinking about it, it seems to me that there is something wrong with the dithering
algorithm when the 16-bit image has no single-color fields but the 8-bit dithered one has.

Shouldn't the added value be in the range -1 .. +1 (or rather -0.5 .. +1.5 to make a
mid-tread quantization)? This is the case with audio dither.

If the dither range is just 0 .. 1, there will be single-color fields.

Reported by torger@ludd.ltu.se on 2010-12-20 07:38:37

Beep6581 commented 9 years ago
Here's the optimal dithering algorithm ;-)

 1. Make a preliminary image - add 0.5 (of a quantization step) and truncate to 8 bit.
    Adding 0.5 before quantization makes sure that we get rounded values rather than
    just truncated ones.
 2. Start over with the original to make the final image.
 3. For each pixel - add 0.5, and test by looking in the preliminary image how many
    neighbors with the same color (look at RGB in total) it has (i.e. test how large
    a single-color field it is in).
 4. Based on a threshold on the single-color field size (say 20 pixels or so), add dither
    in -1.0 .. +1.0 (in total -0.5 .. +1.5 with the previously added 0.5). If the
    pixel does not pass the threshold, don't add any dither.
 5. Truncate to 8 bit.

This way dither will only be added where needed, thus avoiding introducing more noise
than necessary. Probably many images don't need any dither at all. Actually, you don't
need to make an actual preliminary image or count the actual single-color field size;
you can truncate the original data as you go and count pixels only up to the threshold.
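A rough, single-channel sketch of the idea (the flat-field test is approximated with a
5x5 window rather than a true field-size count, and all names, window sizes and thresholds
are arbitrary choices, not part of the proposal):

// Selective dithering: round everywhere, but add +/- dither only where the
// preliminary rounded image is locally flat (single-color).
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

void selectiveDither(const std::vector<float>& in,  // linear values scaled 0..65535
                     std::vector<uint8_t>& out, int w, int h) {
    // Step 1: preliminary image - plain rounding to 8 bits.
    std::vector<uint8_t> prelim(w * h);
    for (int i = 0; i < w * h; ++i)
        prelim[i] = (uint8_t)std::min(255.0f, in[i] / 256.0f + 0.5f);

    std::mt19937 rng(1);
    std::uniform_real_distribution<float> dith(-1.0f, 1.0f);
    const int flatThreshold = 20;                    // "say 20 pixels or so"

    // Steps 2-5: dither only where the preliminary image is flat.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int same = 0;
            for (int dy = -2; dy <= 2; ++dy)
                for (int dx = -2; dx <= 2; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx < 0 || yy < 0 || xx >= w || yy >= h) continue;
                    if (prelim[yy * w + xx] == prelim[y * w + x]) ++same;
                }
            float v = in[y * w + x] / 256.0f + 0.5f;          // rounding offset
            if (same >= flatThreshold) v += dith(rng);        // -0.5 .. +1.5 in total
            out[y * w + x] = (uint8_t)std::max(0.0f, std::min(255.0f, v));
        }
}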

Reported by torger@ludd.ltu.se on 2010-12-20 08:08:59

Beep6581 commented 9 years ago
...I think I shall test some dithering implementations; I will be traveling now though,
so I'll see. I would like to test the "selective" dithering algorithm, comparing it
to just applying dither overall. After thinking about it for a while, I think that
applying dither overall [-0.5 .. +1.5 on all pixels and then truncating] is probably
best after all (I don't think it will worsen the apparent noise in areas where it is "not
needed", it will just change the already available noise slightly), and it is surely
easier to make a parallel, high-throughput implementation of that.

Reported by torger@ludd.ltu.se on 2010-12-20 09:01:43

Beep6581 commented 9 years ago
There is no posterization in 16-bit, and you just want to preserve the average value
using the dither, which the 8-bit truncation (with or without rounding) is not doing.
My prescription does that.  Anything that does more than change a 'round up' to a
'round down' over an average-8-bit-truncation-remainder-weighted portion of pixels is
changing pixel values more than necessary, and adding unnecessary noise.

Reported by ejm.60657 on 2010-12-20 13:15:30

Beep6581 commented 9 years ago
Here is an example (Mathematica code) of what my suggestion will do (I made the quantization
steps 2.5 8-bit levels for ease of visibility; actual 8-bit implementation will look
even better):

http://theory.uchicago.edu/~ejm/pix/20d/posts/ojo/posterizationcode.png

Reported by ejm.60657 on 2010-12-20 15:48:49

Beep6581 commented 9 years ago
wow, this is an amazing tool!

Reported by michaelezra000 on 2010-12-20 16:07:11

Beep6581 commented 9 years ago
To what are you referring, Michael?

Reported by ejm.60657 on 2010-12-20 16:09:04

Beep6581 commented 9 years ago
@Emil: if you replace the dither lookup table with (rand()&255) in my patch then the
result will be equal to your Mathematica test.

Reported by janrinze on 2010-12-20 16:17:15

Beep6581 commented 9 years ago
@Emil: Mathematica - does it really work as in the image you posted? Snippets of code
with related image output? This must be a very convenient setup for experimenting with
algorithms - I did not know Mathematica was that suitable for image processing.
The dithering result is no less amazing :)
It would also be interesting to see how this handles fine detail.

Reported by michaelezra000 on 2010-12-20 16:23:29

Beep6581 commented 9 years ago
@Michael: Yes, Mathematica was used extensively for the development of AMaZE, CFA
autocorrection, line denoise, pyramid denoise, etc.  Actually, I think the Mathematica
prototype for AMaZE is better than the current C code; it's just too slow to implement for
the moment.  I hope that if key elements can be done with OpenCL or CUDA, the original
algorithm can be put into RT.

But yes, Mathematica is very handy, if slow.  Usually I'm testing on little crops of
images of less than 0.5 MP.  Haven't worked with it in a while so I'm a bit rusty.

Reported by ejm.60657 on 2010-12-20 16:41:31

Beep6581 commented 9 years ago
One additional suggestion -- if the dither is applied separately to each RGB channel,
it will increase both chroma noise and luma noise in the process.  One may want to apply
the dither test to the luma channel

Y = (0.299*r + 0.587*g + 0.114*b)

and, on the basis of that, round all R, G, B values in the same direction (all up or all
down).  Just a suggestion, I haven't tested it.  It would be especially important in deeper
shadows, as in torger's test image.
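A small sketch of that suggestion (hypothetical names; r, g, b are 16-bit channel values):

// Decide the rounding direction once per pixel from the luma remainder,
// then apply the same direction to all three channels, so the dither adds
// mostly luma noise rather than chroma noise.
#include <cstdint>
#include <random>

inline void ditherPixelLuma(uint16_t r, uint16_t g, uint16_t b,
                            uint8_t& r8, uint8_t& g8, uint8_t& b8,
                            std::mt19937& rng) {
    float y = 0.299f * r + 0.587f * g + 0.114f * b;   // 16-bit-scale luma
    bool up = (uint32_t(y) & 255) > (rng() & 255);    // luma remainder as probability
    auto q = [up](uint16_t v) -> uint8_t {
        uint32_t o = (v >> 8) + (up ? 1 : 0);
        return (uint8_t)(o > 255 ? 255 : o);
    };
    r8 = q(r); g8 = q(g); b8 = q(b);
}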

Reported by ejm.60657 on 2010-12-20 18:04:45

Beep6581 commented 9 years ago
In audio, dither noise is shaped (roughly high pass filtered, or some more detailed
shape is used) to put most noise energy where the ear is least sensitive.

Perhaps there is some corresponding thing one could do here, to adapt to the eye? I'm
much better at psychoacoustics than psychovision (if there is such a word even) so
I don't have a suggestion to come with...

Reported by torger@ludd.ltu.se on 2010-12-20 21:48:17

Beep6581 commented 9 years ago
Well, the dither I suggested is at the pixel level; you can't get higher frequency than
that.  I think you're right too, that any attempt to smooth the dither pushes it to
lower frequencies where one risks that the result starts to look blotchy.

Reported by ejm.60657 on 2010-12-20 22:12:29

Beep6581 commented 9 years ago
I was thinking more that if the eye is more sensitive to green or luminance, it would
be better to create dither only in the R and B channels... just as an example; perhaps
the eye is more sensitive to chrominance noise than luminance, and then it would be
better to change RGB together.

After reading a bit about image dithering I see that it is a bit different from audio,
so my talk about noise amplitudes etc. earlier was probably a bit misleading; mid-tread
quantization, for example, is a time-domain thing, and just applying the +1 .. -1 noise
amplitude that's good for audio will lead to more noise than necessary in imaging... just
as you (Emil) said in a previous comment. That the dither is too small in the current patch
(or that there is some other problem) is however clear, since single-color fields are still
forming.

Of all the various image dithering algorithms, some random dither approach is indeed
probably best in this particular case, like most of you have suggested, since it will
look most like "a part of the photograph". That is, I think random dither is better here
than Floyd-Steinberg, which otherwise seems to be the "king" among dither algorithms.
Floyd-Steinberg seems to be more suited to cases where colors/quantization steps are
reduced more drastically.

Reported by torger@ludd.ltu.se on 2010-12-21 07:59:02

Beep6581 commented 9 years ago
Yes, I think dithering in the Y channel of YCbCr is the way to go.

Some of what you are seeing with single-color fields is simply the blotchiness of randomness;
random strings actually have much more "clumping" than people think.  Error-diffusion
dithering such as Floyd-Steinberg would instead create regular patterns of stippling
in a graduated way.  It depends on what you find more visually appealing.  Some of the
issue is the 2.5 steps that the dither has to bridge in my artificial example; if we
redo it with full 8-bit tonal depth the result is

http://theory.uchicago.edu/~ejm/pix/20d/posts/ojo/posterization256.png

and it's really hard to see how that is going to be a problem unless you are in the
camp that wants the more uniform stippling of F-S type dithering.

Reported by ejm.60657 on 2010-12-21 14:50:49

Beep6581 commented 9 years ago
Hmm... somehow the dithering code in the patch does not seem to run at all for me, no
wonder it does not make much difference. I wonder why... my own incompetence is so
far the prime suspect :-)

Reported by torger@ludd.ltu.se on 2010-12-21 16:05:21

Beep6581 commented 9 years ago
Here is Floyd-Steinberg dithering, I think it does actually look a bit better:

http://theory.uchicago.edu/~ejm/pix/20d/posts/ojo/FSdither.png

Reported by ejm.60657 on 2010-12-21 18:00:38

Beep6581 commented 9 years ago
It seems like the 8-bit dither code is never run -- a 16-bit image is generated (and thus
the 8-bit lab2rgb code is never run), and then in the last step, saveTIFF, a parameter
to that function states 8 bps, and crude truncation takes place.

So it seems to me that the patch does not work at all as far as dithering is concerned,
or am I missing some setting?

Reported by torger@ludd.ltu.se on 2010-12-21 20:54:45

Beep6581 commented 9 years ago
It seems like there are many issues being addressed at once here, without resolution.
 I am going to break out Jan's patch into a series of smaller patches for discrete
changes that we can discuss one by one, and decide whether they should be implemented.

Reported by ejm.60657 on 2010-12-23 01:22:34

Beep6581 commented 9 years ago
Here is a patch that addresses only the highlight recovery.  I rather like the effect;
the one change I can imagine is to either modify the point where the highlight rolloff
begins, or allow a user slider to control it.

Reported by ejm.60657 on 2010-12-23 01:52:33