azizkayumov opened 6 months ago
For future readers and users of outlier scores in Python HDBSCAN, here is the simplest test case I could find on which Python HDBSCAN computes outlier scores incorrectly. As you can see from the plot, the point ranked 1 should be ranked second, and the point ranked 2 should be ranked first.
This happens because of the bug in how the maximum lambda values of clusters are calculated, as I reported above. To reproduce the plot, please run the following:
import hdbscan
import numpy as np
import matplotlib.pyplot as plt
# Step 1: Example data
data = [
    # cluster 1 (formed at eps = √2)
    [1, 1],
    [1, 2],
    [2, 1],
    [2, 2],
    # cluster 2 (formed at eps = √2)
    [4, 1],
    [4, 2],
    [5, 1],
    [5, 2],
    # cluster 3 (formed at eps = √2)
    [9, 1],
    [9, 2],
    [10, 1],
    [10, 2],
    [11, 1],
    [11, 2],
    [2, 5],   # outlier1: cd = √13, joins cluster1 and cluster2 at eps = √13
    [10, 8],  # outlier2: cd = √37, joins the root cluster at eps = √37
]
# Then the outlier scores should be:
# glosh(outlier1) = 1 - √2 / √13 = 0.60776772972
# glosh(outlier2) = 1 - √2 / √37 = 0.76750472251
# But, Python HDBSCAN gives the following outlier scores:
# glosh(outlier1) = 1 - 2 / √13 = 0.44529980377 (cluster1 and cluster2 join at eps = 2, ignoring that cluster1 and cluster2 both formed at eps = √2)
# glosh(outlier2) = 1 - 4 / √37 = 0.34240405077 (cluster3 joins cluster1 and cluster2 at eps = 4)
# Step 2: Compute the outlier scores
k = 4
clusterer = hdbscan.HDBSCAN(
    alpha=1.0,
    approx_min_span_tree=False,
    gen_min_span_tree=True,
    metric='euclidean',
    min_cluster_size=k,
    min_samples=k,
    allow_single_cluster=False,
    match_reference_implementation=True)
clusterer.fit(data)
mst = clusterer.single_linkage_tree_.to_numpy()
mst_weight = sum([x[2] for x in mst])
print("MST weight: ", mst_weight) # Should be 30.83044942
# Step 3. Plot the data and the outlier scores
outlier_scores = clusterer.outlier_scores_
plt.scatter([x[0] for x in data], [x[1] for x in data], s=25, c=outlier_scores, cmap='viridis')
plt.colorbar()
# Print outlier scores
for (i, score) in enumerate(outlier_scores):
    print(f"Outlier {i+1} score: {score}")
# Step 4: Assign rankings and plot top outliers
indices = [i for i in range(len(outlier_scores))]
indices.sort(key=lambda x: outlier_scores[x])
ranks = indices[-30:]
ranks = reversed(ranks)
for i, idx in enumerate(ranks):
    plt.text(data[idx][0], data[idx][1], str(i+1), fontsize=10, color='black')
plt.title("PyHDBSCAN: outlier scores & rankings")
plt.axis('equal')
plt.show()
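For reference, the expected and the currently reported scores in the comments above can be checked by hand, without the library. This is a minimal sketch that only recomputes the GLOSH formula `1 - eps_max(cluster) / eps(point)` from the distances derived above; the helper name `glosh` is mine, not part of the hdbscan API:

```python
import math

def glosh(eps_cluster, eps_join):
    # GLOSH as used in the comments above: eps_cluster is the level at which
    # the point's cluster forms, eps_join is the level at which the point joins it.
    return 1.0 - eps_cluster / eps_join

# Expected scores (clusters form at eps = √2):
print(glosh(math.sqrt(2), math.sqrt(13)))  # outlier1 -> ~0.6078
print(glosh(math.sqrt(2), math.sqrt(37)))  # outlier2 -> ~0.7675

# What Python HDBSCAN currently reports, per the comments above
# (it uses the merge levels eps = 2 and eps = 4 instead of √2):
print(glosh(2.0, math.sqrt(13)))           # outlier1 -> ~0.4453
print(glosh(4.0, math.sqrt(37)))           # outlier2 -> ~0.3424
```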
As the project is being merged into scikit-learn's main library, it would be good to fix this misinterpretation of GLOSH altogether, or at least warn users not to rely on outlier_scores_ for their data analysis.
I am curious whether the GLOSH implementation in this repository correctly follows the paper's definition of "outlierness". According to the HDBSCAN* paper (R. J. G. B. Campello et al., 2015, page 25):
Looking at the `max_lambdas` function for computing εmax(xi), I think the original paper's explanation (the bold italic text above) is not correctly interpreted. It seems the `max_lambdas` function only considers the death of a parent cluster, not the latest death of its subclusters. To reproduce this issue, please run the following code:
This should show the following plot:
As you can see from the plot, the outlier scores assigned to the data points between clusters (please find the yellow points between the clusters!) do not look like "natural" outliers compared to the other outliers. From my understanding of the paper, far-away outlier points should receive higher scores (just like reading a topographic map), and points between clusters should receive lower scores. I think GLOSH is supposed to give us this instead:
It seems a fix of GLOSH may also help with #116. I would like to open a PR, but I am having trouble building the code for now. Please let me know if there is something I might be missing.
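If I understand the fix correctly, a cluster's maximum lambda should be the maximum death level over the cluster and all of its subclusters, not just the parent's own death. A toy sketch of that bottom-up propagation follows; the tree encoding and all names here are hypothetical, not the library's internals (in the library, this would operate on the condensed tree, and children happen to carry larger ids than their parents, which the iteration order below assumes):

```python
# Hypothetical condensed-tree fragment: child -> parent, plus each cluster's
# own maximum lambda (the density level at which the cluster itself "dies").
parents = {2: 1, 3: 1, 4: 2, 5: 2}                   # cluster 1 is the root
own_max_lambda = {1: 0.25, 2: 0.5, 3: 0.5, 4: 0.7, 5: 0.8}

# Propagate each subcluster's max lambda up to its parent, visiting children
# before parents, so every cluster ends up carrying the latest death among
# its descendants as well as its own.
max_lambda = dict(own_max_lambda)
for child in sorted(parents, reverse=True):
    p = parents[child]
    max_lambda[p] = max(max_lambda[p], max_lambda[child])

print(max_lambda[1])  # root now carries 0.8, inherited from subcluster 5
```

With GLOSH written in lambda terms, `score(x) = 1 - lambda(x) / max_lambda(C)`, using the subtree maximum is exactly what turns the reported `1 - 2/√13` into the expected `1 - √2/√13` in the test case above.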