robobenklein opened 1 year ago
Converting individual comparisons based on P(A > B) likelihood into a ranking: https://stackoverflow.com/a/17701105/2375851
Elo estimation algorithms could be used to fit each item's likelihood of being greater onto a range of scores: https://chess.stackexchange.com/questions/37352/what-sets-the-absolute-value-of-players-elo-rating
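A minimal sketch of the standard Elo update rule, as it would apply to a single A-vs-B comparison (the fixed K-factor of 32 and the function names are my own assumptions, not from the linked post):

```python
def elo_expected(r_a, r_b):
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, outcome, k=32.0):
    """Update both ratings after one comparison.

    outcome: 1.0 if A won, 0.0 if B won, 0.5 for a draw.
    """
    e_a = elo_expected(r_a, r_b)
    r_a_new = r_a + k * (outcome - e_a)
    r_b_new = r_b + k * ((1.0 - outcome) - (1.0 - e_a))
    return r_a_new, r_b_new
```

For two items starting at equal ratings, a single win moves the winner up by k/2 and the loser down by the same amount, e.g. `elo_update(1000.0, 1000.0, 1.0)` gives `(1016.0, 984.0)`.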
From pairwise comparisons and rating to a unified quality scale: https://core.ac.uk/download/pdf/226942134.pdf
Using Pairwise Comparisons to Determine Consumer Preferences in Hotel Selection https://www.mdpi.com/2227-7390/10/5/730
super math-heavy
Gradient descent: while it initially sounds like a valid option, the size of the set of possible comparisons (N samples compared against N samples, i.e. O(N^2) pairs) makes the computation expensive and the iterations very slow.
Parallelization techniques from similar algorithms in deep learning could probably speed this up, and perhaps reduce the number of samples needed in each comparison.
https://shashank-ojha.github.io/ParallelGradientDescent/ (conclusion: GPU-sized parallelism didn't help, 8-16 cpu cores was their limit for speedup)
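The batching idea above can be sketched as a vectorized minibatch gradient step (assuming numpy; all names here are illustrative, not from the linked write-up): rather than touching all O(N^2) pairs each iteration, sample a batch of observed comparisons and update them in one array operation.

```python
import numpy as np

def sgd_step(scores, a_idx, b_idx, targets, lr=0.05):
    """One vectorized gradient step over a minibatch of comparisons.

    scores:  (n_items,) array of current item scores
    a_idx, b_idx: (batch,) integer arrays of compared item indices
    targets: (batch,) array of P(A > B) targets in [0, 1]
    """
    diff = scores[a_idx] - scores[b_idx]
    pred = 1.0 / (1.0 + np.exp(-diff))  # logistic win probability
    grad = pred - targets               # cross-entropy gradient w.r.t. scores[a_idx]
    # scatter-add so repeated indices within a batch accumulate correctly
    np.add.at(scores, a_idx, -lr * grad)
    np.add.at(scores, b_idx, lr * grad)
    return scores
```

Each step costs O(batch) regardless of N, and the updates conserve the total of all scores, so the ordering is what carries the information.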
Inputs: a series of keys `A vs B`, each with a value ranging from -1 to 1, specifying A or B to be strongly/weakly greater/lesser than the other.
Outputs: the most optimal assignment of values to keys, where the comparisons are most strongly represented in the resulting sorted ordering.
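Given those inputs and outputs, here is a minimal pure-Python sketch of one way to do the fit (names and hyperparameters are hypothetical): map each value v in [-1, 1] to a target probability P(A > B) = (v + 1) / 2, then fit one score per key by stochastic gradient descent on a logistic (Bradley–Terry-style) model.

```python
import math
import random

def fit_scores(comparisons, n_iters=4000, lr=0.05, seed=0):
    """Fit a score per key so that sigmoid(s[a] - s[b]) matches
    the observed preference strength.

    comparisons: list of (a, b, v) with v in [-1, 1];
    v > 0 means A is preferred, v < 0 means B is preferred.
    """
    rng = random.Random(seed)
    scores = {}
    for a, b, _ in comparisons:
        scores.setdefault(a, 0.0)
        scores.setdefault(b, 0.0)
    for _ in range(n_iters):
        a, b, v = rng.choice(comparisons)  # SGD: one comparison per step
        target = (v + 1.0) / 2.0           # map [-1, 1] -> P(A > B)
        pred = 1.0 / (1.0 + math.exp(scores[b] - scores[a]))
        grad = pred - target               # d(cross-entropy)/d(scores[a])
        scores[a] -= lr * grad
        scores[b] += lr * grad
    return scores
```

For example, `fit_scores([("x", "y", 1.0), ("y", "z", 1.0), ("x", "z", 1.0)])` yields scores whose sorted order is x > y > z, matching all three comparisons.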
Given the size of my own library (about 35k possible 30-second samplings, and almost 5k tracks), performance needs to be good for this to be useful.