From https://x.com/n0riskn0r3ward/status/1818657065507672266:

And this may already happen, but if the two models return the same document (happens to me a lot) the result should be discarded. I suspect the random model selection is adjusted to account for this, and as a result there are fewer head-to-head comparisons between top models?
We don't want people giving them different ratings here, but we also want to return something to the user. Not sure what the best approach is here (remove the non-tie buttons / leave it as is and filter those votes out of the results).
IIRC our approach is to select two models first, then the user submits the query, and only then does it go and fetch the docs, so the approach he describes wouldn't work offhand. So perhaps we leave it in as is and remove that data when doing analysis. Does this seem reasonable @Muennighoff?
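For reference, a minimal sketch of what "remove that data when doing analysis" could look like, assuming battle records are dicts with hypothetical `doc_a` / `doc_b` / `vote` fields (not the actual log schema):

```python
from typing import Iterable


def filter_identical_doc_battles(battles: Iterable[dict]) -> list[dict]:
    """Drop battles where both models returned the same document.

    Assumes each battle record stores the retrieved documents under
    hypothetical 'doc_a' / 'doc_b' keys; only battles with genuinely
    different retrievals are kept for rating computation.
    """
    return [b for b in battles if b.get("doc_a") != b.get("doc_b")]


# Usage: clean the battle log before computing Elo / Bradley-Terry ratings.
# battles = load_battle_log(...)   # hypothetical loader
# rated_battles = filter_identical_doc_battles(battles)
```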
I just encountered this situation when choosing the first example in Retrieval - Arena (battle) (query = "Which test was devised to determine whether robots can think?"). I think, depending on bandwidth, we can:

1. leave it as is and remove that data when doing analysis, or
2. automatically start a new round and still save the data, or
3. automatically start a new round but not save the data (rough sketch of the new-round flow below).
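A rough sketch of what options 2/3 (automatically starting a new round) could look like, assuming a hypothetical `retrieve(model, query)` helper standing in for the arena's actual retrieval call rather than the real code:

```python
import random


def run_battle(query: str, models: list[str], retrieve, max_attempts: int = 5):
    """Re-sample model pairs until the two retrieved docs differ.

    Returns (model_a, model_b, doc_a, doc_b), or None if every attempt
    produced identical documents.
    """
    for _ in range(max_attempts):
        model_a, model_b = random.sample(models, 2)
        doc_a = retrieve(model_a, query)
        doc_b = retrieve(model_b, query)
        if doc_a != doc_b:
            return model_a, model_b, doc_a, doc_b
        # Identical documents: start a new round. Whether this attempt is
        # still logged (option 2) or discarded (option 3) is the open question.
    return None
```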