hsluv / hsluv-python

Python implementation of HSLuv (revision 4)
https://pypi.org/project/hsluv/
MIT License

current version returns rgb values outside range 0-1 #7

Closed. Telemin closed this issue 4 years ago.

Telemin commented 8 years ago

e.g.:

>>> import husl
>>> husl.__version__
'4.0.3'
>>> husl.huslp_to_rgb(1,40,100)
[0.9999999999999524, 1.000000000000017, 1.0000000000000127]
>>> husl.husl_to_rgb(1,40,100)
[0.9999999999999524, 1.000000000000017, 1.0000000000000127]

briend commented 6 years ago

Probably not a "bug" really. I've seen another project do this and the reasoning is you might want to do something different besides clamping to 0-1. So, it should be easy enough to clamp this on your own if that's ok for your needs.
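
Clamping on the caller's side can be as small as this (a minimal sketch using the list-taking hsluv API shown later in the thread; the helper name is just an example, not part of the library):

import hsluv

def clamp01(channel):
    # Force one RGB channel into the [0, 1] range.
    return min(1.0, max(0.0, channel))

rgb = [clamp01(c) for c in hsluv.hpluv_to_rgb((1, 40, 100))]
# Every channel of rgb now lies within [0, 1].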

Ti-tanium commented 5 years ago

[screenshot] Well, I think it does have a bug. No matter what hpluv value I pass to 'hsluv_to_rgb', it always returns the same result: [0.9999999999999524, 1.000000000000017, 1.0000000000000127]

boronine commented 5 years ago

Hi @Ti-tanium, based on your snippet it's hard to say what's happening. Can you simplify it?

I did a quick test and the function does seem to work:

In [1]: import hsluv

In [2]: hsluv.hpluv_to_rgb([180, 50, 50])
Out[2]: [0.3368007881623663, 0.4962152283736628, 0.47559596639467644]

In [3]: hsluv.hpluv_to_rgb([180, 60, 60])
Out[3]: [0.3715417440388882, 0.6093313538003646, 0.5801023626733199]

MoritzMaxeiner commented 5 years ago

@boronine: Hi, afaict @Ti-tanium's problem stems from the fact that rgb_to_hpluv takes its input RGB values in the interval [0,1], not [0,255]. I'd have expected rgb_to_hpluv to raise an exception if it receives values outside the expected range, though.
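
In other words, 8-bit values have to be scaled down before conversion; a minimal sketch, assuming the list-taking API shown above (the input triple is arbitrary):

import hsluv

r, g, b = 86, 127, 121                    # 8-bit channels in the 0-255 range
# rgb_to_hpluv expects channels in [0, 1], so scale the 8-bit values first.
hue, sat, light = hsluv.rgb_to_hpluv((r / 255, g / 255, b / 255))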

boronine commented 5 years ago

@MoritzMaxeiner makes sense, thank you. So there are two issues here:

  1. Values returned by hsluv are not "clamped" to be within 0-1 range
  2. hsluv does not raise an error when it receives numbers outside of 0-1 range

Although both are valid issues, I decided to leave it as is.

  1. In practice, the return value from HSLuv will either be used as a 0-1 floating point value (in which case floating point error is to be expected), or it will be turned into an integer, e.g. 1-100, in which case the floating point error will be erased in the process of rounding (see the sketch after this list).
  2. Similarly, hsluv expects floating point error on its inputs as well, so I wouldn't check that the inputs are >0, for example. Of course, I could be more lenient and check for >-0.001, but that feels too arbitrary and not worth maintaining and testing.
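
To illustrate point 1 (the scale factors 255 and 100 here are only examples): error of this size disappears as soon as the value is scaled and rounded to an integer channel.

value = 1.000000000000017         # a typical out-of-range output
print(round(value * 255))         # 255
print(round(value * 100))         # 100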

If anyone strongly disagrees feel free to re-open.

hartwork commented 4 years ago

The docs clearly say "between 0 and 1" so I'm surprised to hear that returning 1.000000000000017 is considered acceptable. When I saw over 100% saturation a few minutes ago, I was really surprised:

In [25]: hsluv.hex_to_hsluv('#000001')[1]
Out[25]: 100.00000000000082

If I need a wrapper to sanitize the returns from this library against their official contract, isn't that a bug in the library?

boronine commented 4 years ago

@hartwork regarding documentation, you make a good point. At the very least there should be a notice about floating point error. I also agree that the library shouldn't demand a wrapper. What does the wrapper help you achieve? In other words, what kind of code are you plugging in HSLuv outputs that is sensitive to floating point error?

hartwork commented 4 years ago

I'm not plugging it into something yet — I first heard about HSLuv only yesterday and wanted to see it in action. I get that often…

> the floating point error will be erased in the process of rounding

…but — with things like malicious input, input validation and library interfaces in mind — I have a hard time with return values going outside of expectable bounds.

Maybe there could be a flag to choose between faster and more correct results, with more correct being the default, if the worry is about losing too much performance?

boronine commented 4 years ago

@hartwork what makes me hesitate is that a clamped output might actually be less correct mathematically since it distorts the distribution of outputs. I'm guessing it might cost a binary digit of precision?

I'm not convinced about clamping, but I did add a notice to the README. I'll keep this issue open for discussion.
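
To make the distortion point concrete (a hypothetical illustration): clamping collapses everything above 1.0 onto exactly 1.0 while leaving values just below 1.0 untouched, so the error is no longer symmetric around the true value.

values = [0.9999999999999524, 1.000000000000017, 1.0000000000000127]
clamped = [min(1.0, max(0.0, v)) for v in values]
print(clamped)  # [0.9999999999999524, 1.0, 1.0]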

hartwork commented 4 years ago

> @hartwork what makes me hesitate is that a clamped output might actually be less correct mathematically since it distorts the distribution of outputs. I'm guessing it might cost a binary digit of precision?

I get what you mean by distortion (it reminds me of modulo bias).

Would dividing by a to-be-determined correction factor not address the concern about distortion?

In [7]: hsluv.hex_to_hsluv('#0000ff')[1] / 1.0000000000000082
Out[7]: 100.0

I wonder whether, if we were expecting round-down conversion to integer, we'd want float output in [0.0, 101.0) to end up as unbiased [0, 100] integer results.

scratchmex commented 4 years ago

> @hartwork regarding documentation, you make a good point. At the very least there should be a notice about floating point error. I also agree that the library shouldn't demand a wrapper. What does the wrapper help you achieve? In other words, what kind of code are you plugging in HSLuv outputs that is sensitive to floating point error?

https://github.com/mwaskom/seaborn/issues/1994. Exception raised from matplotlib: ValueError: RGBA values should be within 0-1 range
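
For context, matplotlib rejects any channel outside [0, 1], so even a tiny overshoot fails validation; a sketch of the failure and of the clamped workaround (the matplotlib call is illustrative and the input triple arbitrary):

import matplotlib.colors as mcolors
import hsluv

rgb = hsluv.hsluv_to_rgb((1, 40, 100))          # may contain values like 1.000000000000017
# mcolors.to_hex(rgb) can then raise: ValueError: RGBA values should be within 0-1 range
clamped = [min(1.0, max(0.0, c)) for c in rgb]
print(mcolors.to_hex(clamped))                  # succeeds, e.g. '#ffffff'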

boronine commented 4 years ago

@scratchmex Excellent example, thank you!

I'm convinced we should add clamping now, especially since we advertise Seaborn on our example page. I don't have the time to do it right now, but I would happily merge a PR where clamping is applied on the output. Probably the cleanest way to do it is to have a private and a public API, where the unclamped function names are prefixed with an underscore.
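
A rough sketch of that shape (the names and the delegation to the installed package are illustrative, not the actual patch):

import hsluv

def _hsluv_to_rgb(hsl):
    # Stand-in for the existing, unclamped conversion.
    return hsluv.hsluv_to_rgb(hsl)

def hsluv_to_rgb_clamped(hsl):
    # Hypothetical public wrapper: clamp every channel into [0, 1].
    return [min(1.0, max(0.0, c)) for c in _hsluv_to_rgb(hsl)]

print(hsluv_to_rgb_clamped((1, 40, 100)))       # no channel exceeds 1.0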

scratchmex commented 4 years ago

I'll be working on this. My idea is to define a function with functools.partial to round the return values of the conversion functions. I thought of functools for speed.

The underscores are a great idea.
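
One possible reading of that idea (a sketch only; the ndigits value and the wrapper name are assumptions, not the eventual PR):

from functools import partial

import hsluv

_trim = partial(round, ndigits=12)              # pre-bound rounding helper

def hsluv_to_rgb_rounded(hsl):
    # Apply the pre-bound round() to every channel of the conversion result.
    return [_trim(c) for c in hsluv.hsluv_to_rgb(hsl)]

print(hsluv_to_rgb_rounded((1, 40, 100)))       # e.g. [1.0, 1.0, 1.0]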

hartwork commented 4 years ago

That's an interesting turn — thanks @scratchmex!