olavolav / uniplot

Lightweight plotting to the terminal. 4x resolution via Unicode.
MIT License

Label choice does not always seem optimal #17

Closed · olavolav closed this 1 year ago

olavolav commented 1 year ago

In the example from #15, it is curious that the .5 labels are not chosen by the algorithm.

>>> from uniplot.axis_labels.extended_talbot_labels import extended_talbot_labels
>>> extended_talbot_labels(x_min=-1.846526999372103, x_max=-0.1651564799721942, available_space=17, vertical_direction=True, verbose=True).render()
### exponent = 0 ###
(...)
### exponent = -1 ###
### j = 1 ###
### i = 0, q = 1 ###
Testing labels: [-1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.  -0.9 -0.8 -0.7 -0.6 -0.5
 -0.4 -0.3 -0.2] => simplicity = 0.19999999999999996, coverage = 0.9940240106065467, density = -4.666666666666667, grid_alignment unknown => score_approx = -0.8514939973483636
Testing labels: [-1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.  -0.9 -0.8 -0.7 -0.6 -0.5
 -0.4 -0.3 -0.2] => simplicity = 0.19999999999999996, coverage = 0.9940240106065467, density = -4.666666666666667, grid_alignment => 1, score = -0.8514939973483636
=> New best score 😀
### i = 1, q = 5 ###
Testing labels: [-1.4 -0.9 -0.4] => simplicity = 0.0, coverage = 0.549810354275108, density = 0.0, grid_alignment unknown => score_approx = 0.337452588568777
Testing labels: [-1.4 -0.9 -0.4] => simplicity = 0.0, coverage = 0.549810354275108, density = 0.0, grid_alignment => 1, score = 0.337452588568777
=> New best score 😀
(...)
### j = 5 ###
### i = 0, q = 1 ###
Testing labels: [-1.4 -0.9 -0.4] => simplicity = -3.8, coverage = 0.549810354275108, density = 0.0, grid_alignment unknown => score_approx = -1.5625474114312228
Testing labels: [-1.8 -1.3 -0.8 -0.3] => simplicity = -3.8, coverage = 0.9640122259435896, density = -0.33333333333333326, grid_alignment unknown => score_approx = -1.5589969435141025
Testing labels: [-1.7 -1.2 -0.7 -0.2] => simplicity = -3.8, coverage = 0.9598794169078467, density = -0.33333333333333326, grid_alignment unknown => score_approx = -1.5600301457730383
Testing labels: [-1.6 -1.1 -0.6] => simplicity = -3.8, coverage = 0.5580759723465938, density = 0.0, grid_alignment unknown => score_approx = -1.5604810069133515
Testing labels: [-1.5 -1.  -0.5] => simplicity = -3.8, coverage = 0.5893162650552402, density = 0.0, grid_alignment unknown => score_approx = -1.55267093373619

It seems odd that the simplicity for the intuitive choice of [-1.5, -1.0, -0.5] is negative.

Let's investigate; this could be a bug.

olavolav commented 1 year ago

A simplified example:

>>> extended_talbot_labels(x_min=0.14, x_max=1.9, available_space=60, vertical_direction=False, verbose=False).render()
['               0.6              1.1              1.6']

The result should be the labels 0.5, 1.0, and 1.5.
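A quick illustrative check (an assumption about plain step-aligned grids, not uniplot's internal candidate generation) confirms that the intuitive grid fits the range, so the problem is in the scoring, not feasibility:

```python
# Illustrative check: both the intuitive 0.5-step grid aligned to
# multiples of the step, and the grid uniplot actually picked, fit
# inside [0.14, 1.9]. This mimics grid generation generically -- it
# is not uniplot's internal code.
import numpy as np

x_min, x_max, step = 0.14, 1.9, 0.5

# First multiple of `step` at or above x_min, then every step up to x_max.
start = np.ceil(x_min / step) * step
aligned = np.arange(start, x_max + 1e-9, step)
print(aligned)  # [0.5 1.  1.5]

# The labels uniplot actually rendered form a grid shifted off the
# step multiples by 0.1 -- equally feasible, just less "simple":
shifted = aligned + 0.1
print(shifted)  # [0.6 1.1 1.6]
```

Both candidate grids are valid, so the algorithm should prefer the aligned one on simplicity grounds rather than penalizing it.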