scikit-hubness
provides tools for the analysis and
reduction of hubness in high-dimensional data.
Hubness is an aspect of the curse of dimensionality
and is detrimental to many machine learning and data mining tasks.
The skhubness.analysis and skhubness.reduction packages allow you to

analyze whether your data show hubness
reduce hubness with a variety of different techniques
perform downstream analysis with scikit-learn, thanks to compatible data structures

The skhubness.neighbors package provides approximate nearest neighbor (ANN)
search. It is compatible with scikit-learn classes and functions relying
on neighbors graphs due to compliance with KNeighborsTransformer APIs
and data structures. Using ANN can speed up many scikit-learn classification,
clustering, embedding, and other methods that operate on neighbors graphs.

scikit-hubness thus provides approximate nearest neighbor search, hubness reduction,
and their combination, which allows for fast hubness-reduced neighbor search
in large datasets (tested with >1M objects).
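As a minimal sketch of this compatibility (the hyperparameters are illustrative,
and nmslib must be installed), an ANN transformer can feed a precomputed
neighbors graph directly into a standard scikit-learn estimator:

from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from skhubness.neighbors import NMSlibTransformer

# The ANN transformer emits a sparse distance graph, which the
# classifier consumes via metric="precomputed".
ann_pipe = make_pipeline(
    NMSlibTransformer(n_neighbors=20, metric="cosine"),
    KNeighborsClassifier(n_neighbors=5, metric="precomputed"),
)
# ann_pipe.fit(X_train, y_train); ann_pipe.predict(X_test)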
Make sure you have a working Python 3 environment (Python 3.8 or later).
Use pip to install the latest stable version of scikit-hubness
from PyPI:
pip install scikit-hubness
NOTE: v0.30 is currently under development and not yet available on PyPI. Install from sources to obtain the bleeding edge version.
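A from-source installation might look as follows (the GitHub URL is an
assumption here, not stated in this document; check the project page):

git clone https://github.com/VarIr/scikit-hubness.git
cd scikit-hubness
pip install -e .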
Dependencies are installed automatically if necessary.
scikit-hubness is based on the SciPy stack, including numpy, scipy, and scikit-learn.
Approximate nearest neighbor search and approximate hubness reduction
additionally require at least one of the following packages:
nmslib
for hierarchical navigable small-world graphs in skhubness.neighbors.NMSlibTransformer
ngtpy
for nearest neighbor graphs (ANNG, ONNG) in skhubness.neighbors.NGTTransformer
puffinn
for locality-sensitive hashing in skhubness.neighbors.PuffinnTransformer
annoy
for random projection forests in skhubness.neighbors.AnnoyTransformer
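For example, the packages available on PyPI can usually be installed as
follows (the PyPI names nmslib, ngt, and annoy are assumptions based on the
module names above; puffinn is typically built from source):

pip install nmslib
pip install ngt    # provides the ngtpy module
pip install annoy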
For more details and alternatives, please see the Installation instructions.
Additional documentation is available online: http://scikit-hubness.readthedocs.io/en/latest/index.html
See the changelog to find what's new in the latest package version.
Users of scikit-hubness typically want to

analyze whether their data show hubness
reduce hubness
perform downstream learning (e.g., classification)

The following example shows all these steps for an example dataset
from the text domain (dexter). (Please make sure you have installed scikit-hubness.)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, KNeighborsTransformer
from skhubness import Hubness
from skhubness.data import load_dexter
from skhubness.neighbors import NMSlibTransformer
from skhubness.reduction import MutualProximity
# load the example dataset 'dexter', which is embedded in a
# high-dimensional space and could thus be prone to hubness
X, y = load_dexter()
print(f'X.shape = {X.shape}, y.shape = {y.shape}')
# assess the actual degree of hubness in dexter
hub = Hubness(k=10, metric='cosine')
hub.fit(X)
k_skew = hub.score()
print(f'Skewness = {k_skew:.3f}')
# additional hubness indices are available, for example:
hub = Hubness(k=10, return_value="all", metric='cosine')
scores = hub.fit(X).score()
print(f'Robin hood index: {scores.get("robinhood"):.3f}')
print(f'Antihub occurrence: {scores.get("antihub_occurrence"):.3f}')
print(f'Hub occurrence: {scores.get("hub_occurrence"):.3f}')
# There is considerable hubness in dexter. Let's see whether
# hubness reduction can improve kNN classification performance.
# We first create a kNN graph:
knn = KNeighborsTransformer(n_neighbors=50, metric="cosine")
# Alternatively, create an approximate KNeighborsTransformer, e.g.,
# knn = NMSlibTransformer(n_neighbors=50, metric="cosine")
kneighbors_graph = knn.fit_transform(X, y)
# vanilla kNN without hubness reduction
clf = KNeighborsClassifier(n_neighbors=5, metric='precomputed')
acc_standard = cross_val_score(clf, kneighbors_graph, y, cv=5)
# kNN with hubness reduction (mutual proximity) reuses the
# precomputed graph and works in sklearn workflows:
mp = MutualProximity(method="normal")
mp_graph = mp.fit_transform(kneighbors_graph)
acc_mp = cross_val_score(clf, mp_graph, y, cv=5)
print(f'Accuracy (vanilla kNN): {acc_standard.mean():.3f}')
print(f'Accuracy (kNN with hubness reduction): {acc_mp.mean():.3f}')
# Accuracy was considerably improved by mutual proximity.
# Did it actually reduce hubness?
mp_scores = hub.fit(mp_graph).score()
print(f'k-skewness after MP: {mp_scores.get("k_skewness"):.3f} '
f'(reduction of {scores.get("k_skewness") - mp_scores.get("k_skewness"):.3f})')
print(f'Robinhood after MP: {mp_scores.get("robinhood"):.3f} '
f'(reduction of {scores.get("robinhood") - mp_scores.get("robinhood"):.3f})')
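Since the hubness reduction classes follow the transformer API, the same
workflow can presumably be composed into a single pipeline as well. A minimal
sketch, reusing the imports and hyperparameters from the example above
(whether MutualProximity supports pipeline use is an assumption here):

from sklearn.pipeline import make_pipeline

# chain graph construction, hubness reduction, and classification
pipe = make_pipeline(
    KNeighborsTransformer(n_neighbors=50, metric="cosine"),
    MutualProximity(method="normal"),
    KNeighborsClassifier(n_neighbors=5, metric="precomputed"),
)
acc_pipe = cross_val_score(pipe, X, y, cv=5)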
Check the User Guide for additional example usage.
The developers of scikit-hubness
welcome all kinds of contributions!
Get in touch with us if you have comments,
would like to see an additional feature implemented,
would like to contribute code, or have any other kind of issue.
Don't hesitate to file an issue
here on GitHub.
For more information about contributing, please have a look at the contributors guidelines.
(c) 2018-2022, Roman Feldbauer
-2018: Austrian Research Institute for Artificial Intelligence (OFAI)
2018-2021: University of Vienna, Division of Computational Systems Biology (CUBE)
2021-: Independent researcher
Contact: <sci@feldbauer.org>
If you use scikit-hubness
in your scientific publication, please cite:
@Article{Feldbauer2020,
author = {Roman Feldbauer and Thomas Rattei and Arthur Flexer},
title = {scikit-hubness: Hubness Reduction and Approximate Neighbor Search},
journal = {Journal of Open Source Software},
year = {2020},
volume = {5},
number = {45},
pages = {1957},
issn = {2475-9066},
doi = {10.21105/joss.01957},
}
To specifically acknowledge approximate hubness reduction, please cite:
@INPROCEEDINGS{8588814,
author={R. {Feldbauer} and M. {Leodolter} and C. {Plant} and A. {Flexer}},
booktitle={2018 IEEE International Conference on Big Knowledge (ICBK)},
title={Fast Approximate Hubness Reduction for Large High-Dimensional Data},
year={2018},
pages={358-367},
month={Nov},
doi={10.1109/ICBK.2018.00055},
}
The technical report Fast approximate hubness reduction for large high-dimensional data
is available at OFAI.

Further relevant publications include:
Local and Global Scaling Reduce Hubs in Space, Journal of Machine Learning Research 2012, Link.
A comprehensive empirical comparison of hubness reduction in high-dimensional spaces, Knowledge and Information Systems 2018, DOI.
scikit-hubness
is licensed under the terms of the BSD-3-Clause license.
Note: Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: BSD-3-Clause
This enables machine processing of license information based on the SPDX License Identifiers, available at https://spdx.org/licenses/
Parts of scikit-hubness adapt code from scikit-learn.
We thank all the authors and contributors of this project
for the tremendous work they have done.
PyVmMonitor is being used to support the development of this free open source software package. For more information go to http://www.pyvmmonitor.com