embeddings-benchmark / mteb

MTEB: Massive Text Embedding Benchmark
https://arxiv.org/abs/2210.07316
Apache License 2.0

kNN classification accuracy task (P2P) #172

Closed: dkobak closed this issue 3 months ago

dkobak commented 10 months ago

Hi, thanks for the great project!

Currently MTEB has linear classification tasks and kNN-based retrieval tasks. What I feel is missing are kNN classification tasks. For example, I am thinking of possible tasks like BiorxivP2P-kNN-classification, using the class labels that already exist in BiorxivP2P.

kNN graphs are often used for unsupervised learning (dimensionality reduction using t-SNE/UMAP or clustering using methods like Louvain/Leiden), so it seems reasonable to ask "How good is the kNN graph?", and kNN accuracy is one way to quantify that.

This feels quite different from the linear classification task (which is inherently supervised), because the kNN graph is mostly used in unsupervised learning applications, so "kNN graph quality" is a metric that is very relevant for unsupervised learning.

It also feels different from the retrieval task, which is usually asymmetric and S2P (at least currently in MTEB), whereas kNN classification can be P2P and is symmetric.

Is this something that the MTEB developers would be interested in adding, or would be open to considering as a PR? Or do you feel these aspects are sufficiently covered by the existing tasks already?

loicmagne commented 10 months ago

I think you can already use a kNN classifier to evaluate classification tasks, with method="kNN": https://github.com/embeddings-benchmark/mteb/blob/fb90c022e11834ae6605f5bbb0a79af701793a96/mteb/abstasks/AbsTaskClassification.py#L75
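
For reference, a minimal sketch of how this could be invoked (the attribute-setting step is an assumption about the library's internals and may differ across MTEB versions; AbsTaskClassification reads a method attribute that defaults to "logReg"):

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
evaluation = MTEB(tasks=["Banking77Classification"])

# Assumption: switch each classification task to the kNN evaluator
# by setting its method attribute before running the benchmark.
for task in evaluation.tasks:
    task.method = "kNN"

evaluation.run(model, output_folder="results")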

However, this is just a different way to train a classifier; there is no dedicated "kNN graph quality" metric. You also cannot use kNN classification evaluation on clustering datasets like BiorxivP2P; for that we would need to create train sets for the clustering data. I don't know whether the samples within clustering datasets would provide meaningful insights when used for classification, and it would require a clear methodology to ensure that the kNN graph quality metric correctly reflects unsupervised learning performance.

Is method="kNN" satisfactory, or do you think it would be useful to add dedicated evaluations for kNN graphs?

dkobak commented 10 months ago

Thanks @loicmagne for your reply!

I think you can already use a kNN classifier to evaluate classification tasks, with method="kNN"

Oh, that's cool, I did not realize that.

Is method="kNN" satisfactory, or do you think it would be useful to add dedicated evaluations for kNN graphs?

I am not sure. In principle one can compute kNN accuracy on the entire dataset without any train/test split by explicitly constructing the full kNN graph (it's implicitly a leave-one-out procedure):

import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.stats import mode

def knn_accuracy_loocv(X, y, n_neighbors=10):
    # X: (n, d) embedding matrix, y: (n,) array of class labels
    neigh = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    knn = neigh.kneighbors(return_distance=False)  # kNN graph of X; each neighbor list excludes the point itself
    yhat = mode(y[knn], axis=1).mode.flatten()     # kNN classifier predictions (majority vote over neighbors)
    return np.mean(yhat == y)                      # leave-one-out kNN accuracy

This is nice because it evaluates kNN accuracy over the entire dataset. But in practice, running KNeighborsClassifier on a train/test split would yield similar results, so it's not a huge difference.
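
For comparison, a minimal sketch of the split-based variant (the function name and the 50/50 split are my own choices here, not anything defined in MTEB):

from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def knn_accuracy_split(X, y, n_neighbors=10, seed=0):
    # a fixed random_state makes the split deterministic across runs
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=seed, stratify=y
    )
    clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, y_train)
    return clf.score(X_test, y_test)  # test-set kNN accuracy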

You also cannot use kNN classification evaluation on clustering datasets like BiorxivP2P; for that we would need to create train sets for the clustering data.

That's perhaps the biggest problem right now. Datasets like BiorxivP2P seem to me to be very good candidates for this metric. Would it make sense to create train/test splits for all of them? Then any classification metric could be run on them, including kNN. Couldn't one simply create the train/test split at runtime and fix the random seed so that it's deterministic?
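
To sketch what I mean, assuming the clustering task exposes parallel lists sentences and labels (both names hypothetical):

from sklearn.model_selection import train_test_split

# a fixed random_state makes the split reproducible across runs,
# so the derived train set for a clustering dataset is deterministic
train_texts, test_texts, train_labels, test_labels = train_test_split(
    sentences, labels, test_size=0.2, random_state=42, stratify=labels
)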

An alternative option would be to implement the kNN graph evaluation I suggested above, without any train/test split.

loicmagne commented 10 months ago

Thanks for your reply, those are good options. I'm not sure how this should be integrated into the MTEB lib: a new task? A new evaluator? @Muennighoff, what do you think?

Muennighoff commented 10 months ago

Really cool discussion. I think it'd be interesting to have it as an option (while the default remains as is).

dkobak commented 10 months ago

I think it'd be interesting to have it as an option (while the default remains as is)

Hi @Muennighoff, I'm not quite sure what you mean here. To have it as an option where exactly?

Muennighoff commented 10 months ago

If you think it's better as a standalone evaluator (rather than an option for one of the existing ones), that's fine too, I think.

KennethEnevoldsen commented 3 months ago

Seems like this issue has gone stale. Will close it for now, but do feel free to re-open it.