Closed liboyin closed 9 years ago
Libo is doing it.
I've implemented and tested it. The result is surprisingly bad, with under 5% accuracy. I'm going to try a few different distance metrics, both with this algorithm and with the original kNN.
New voting scheme with Euclidean distance achieved 91% accuracy. Woohoo!
Currently, each SIFT descriptor from a training image is assumed to be representative of that class. Each SIFT descriptor from a test image is classified with kNN and votes for the classification of the test image. Here I propose another voting scheme. We treat a class as the set of all SIFT descriptors of all training images of that class. For a test image, we compare each of its SIFT descriptors against every SIFT descriptor of each class. Each class receives a score equal to the smallest distance between any of its descriptors and the current test descriptor. The test image is assigned to the class with the smallest sum of these scores.
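For clarity, here is a minimal NumPy sketch of the proposed scheme, assuming Euclidean distance and that descriptors are already extracted. The function name `classify` and the data layout (a `(n, d)` array per test image, a dict mapping labels to `(m, d)` training-descriptor arrays) are my own illustration, not part of the existing code:

```python
import numpy as np

def classify(test_descriptors, class_descriptors):
    """Sketch of the proposed min-distance voting scheme.

    test_descriptors: (n, d) array of SIFT descriptors from one test image.
    class_descriptors: dict mapping class label -> (m, d) array of all
        SIFT descriptors pooled from that class's training images.
    Returns the label whose summed per-descriptor minimum distance is smallest.
    """
    scores = {}
    for label, descs in class_descriptors.items():
        # Pairwise Euclidean distances between every test descriptor and
        # every training descriptor of this class: shape (n, m).
        diffs = test_descriptors[:, None, :] - descs[None, :, :]
        dists = np.linalg.norm(diffs, axis=2)
        # Per test descriptor, keep the smallest distance to this class;
        # the class score is the sum of these minima over all descriptors.
        scores[label] = dists.min(axis=1).sum()
    # Classify to the class with the smallest total score.
    return min(scores, key=scores.get)
```

Note this is O(n · m) distance computations per class, so for many training descriptors a k-d tree or approximate nearest-neighbour index would likely be needed in practice.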