Hi,
I am trying to use this repo to create a ranking for a set of queries over a document collection. That means that for each query, all documents should be ranked (i.e., the similarities between the query and document embeddings are computed, either approximately with FAISS/ANN or exhaustively). Is this something that this repo supports, or is it purely intended for training models?
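To make the exhaustive variant concrete, here is a minimal numpy sketch of what I have in mind; the shapes, data, and use of cosine similarity are just illustrative assumptions, not tied to this repo's actual embedding pipeline:

```python
import numpy as np

# Toy setup: 3 queries and 5 documents with 4-dim embeddings (made-up shapes).
rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 4)).astype(np.float32)
docs = rng.normal(size=(5, 4)).astype(np.float32)

# L2-normalize so the dot product equals cosine similarity.
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

# Exhaustive scoring: one similarity per (query, document) pair.
scores = queries @ docs.T                # shape (3, 5)

# Full ranking per query: document indices sorted by descending similarity.
rankings = np.argsort(-scores, axis=1)   # each row is a permutation of 0..4
print(rankings.shape)
```

A FAISS index (e.g. an inner-product index over the normalized document embeddings) would replace the matrix multiply and argsort when the collection is too large for exhaustive scoring.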
Also, what is the difference between `infer_eval` and `eval`? I see that the former has to be run/called from the training script.