Closed: pverspeelt closed this issue 2 years ago
I believe the flipping is "expected" behavior. What was wrong, though, was what was flipping, and this led me to find a notable bug where I was missing negative signs on both `score_1` and `score_2` (5617c7101d9376aed8d2ba6d6b3a762bded6b40a). So what you should see now is that `alpha = 1` and `alpha = 0.8` are consistent in terms of direction, while `alpha = 1.2` shows the flipping.
It's a bit tough to think intuitively about why it flips, but it has to do with the `alpha - 1` term that's in the denominator. Because that term goes from negative to positive as `alpha` goes from below 1 to above 1, the overall entropy seems to go from positive to negative (given 5617c7101d9376aed8d2ba6d6b3a762bded6b40a). `A` and `H` are always contributing to those entropies in some direction. Because the overall direction of the entropies flips with `alpha`, I believe that's why the directions of `A` and `H` flip when everything else is held constant.
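To make the sign change concrete, here is a minimal sketch (not the library's actual code) that assumes each type's contribution to the Tsallis entropy is proportional to `-p**alpha / (alpha - 1)`. With `p` held fixed, the sign of that contribution flips as `alpha` crosses 1 because the denominator changes sign:

```python
import numpy as np

def tsallis_type_contribution(p, alpha):
    """Illustrative per-type contribution, assumed proportional to
    -p^alpha / (alpha - 1). Not the library's implementation."""
    return -p ** alpha / (alpha - 1)

p = np.array([0.5, 0.3, 0.2])  # toy probability distribution
for alpha in (0.8, 1.2):
    # alpha < 1: denominator is negative, so the contributions come out positive;
    # alpha > 1: denominator is positive, so the same contributions come out negative.
    print(alpha, tsallis_type_contribution(p, alpha))
```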
When you use the entropy function with an `alpha < 1`, the data on the graph is flipped compared to all the other functions. Using `alpha = 1` or `alpha > 1` will show the graph corresponding to the input. In the cookbook you solved this by switching the titles of the graph around (see Cookbook Tsallis Entropy), but for a casual user who always uses the same order of input values, this is not immediately obvious. The only reason I spotted this is that I'm translating this into R code and was paying close attention to how individual words are calculated and shown on the graphs. See how the value of "A" switches when alpha is smaller than 1.
A simple solution is to multiply the values in `get_entropy_type_scores` by -1 when `alpha` is smaller than 1. Something like the sketch below should do the trick (or take the absolute value, if that is more appropriate). Unless I'm missing something about the explainability of the Tsallis calculation.
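A minimal sketch of that adjustment, assuming the scores produced by `get_entropy_type_scores` are dictionaries mapping each type to its score (the actual return type in the library may differ; `flip_scores_if_needed` is a hypothetical helper):

```python
def flip_scores_if_needed(score_1, score_2, alpha):
    """Hypothetical post-processing step: when alpha < 1, flip the sign of the
    per-type scores so the plotted direction matches the alpha >= 1 case."""
    if alpha < 1:
        score_1 = {t: -s for t, s in score_1.items()}
        score_2 = {t: -s for t, s in score_2.items()}
    return score_1, score_2

# Example usage after computing the scores:
# score_1, score_2 = flip_scores_if_needed(score_1, score_2, alpha)
```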