Benchmarks were all run with `knn_k=200`, the default value in `KNNBenchmarkModule`. No wonder accuracy/F1 scores never went higher than 0.5-0.6 ☹️. Note that for kNN classifiers, choosing a value of $k$ this high can make it impossible to predict minority classes: if a class has fewer than $k/2$ training samples, it can never win a majority vote among $k$ neighbors. I strongly suspect that is what's going on here.
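A minimal sketch of the effect (plain-Python majority-vote kNN on a toy 1-D dataset, not the actual `KNNBenchmarkModule` code): with 10 minority samples and $k=200$, the minority class contributes at most 10 of the 200 votes and can never be predicted, while a small $k$ recovers it easily.

```python
from collections import Counter

def knn_predict(train, x, k):
    """Classify x by majority vote among the k nearest training points.

    train: list of (feature, label) pairs with 1-D float features.
    """
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Imbalanced toy set: 10 minority points ("B") clustered near x=0,
# 190 majority points ("A") clustered near x=5.
train = [(i * 0.01, "B") for i in range(10)]
train += [(5.0 + i * 0.01, "A") for i in range(190)]

# With k=3, the local minority structure wins near x=0:
print(knn_predict(train, 0.05, k=3))    # "B"
# With k=200, every query sees 190 "A" votes vs at most 10 "B",
# so "B" can never be predicted anywhere:
print(knn_predict(train, 0.05, k=200))  # "A"
```

This is why a large default $k$ silently caps accuracy/F1 on imbalanced benchmarks even when the embedding space separates the classes cleanly.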
Tasks: