Closed: dev-err418 closed this issue 3 months ago.
Hi @dev-err418, thank you for your feedback!
Which version of neural-cherche are you using?
Could you try installing the latest version of neural-cherche?
```bash
pip install "neural_cherche[eval]"==1.4.3
```
Here is a sample script that runs evaluation; it runs fine with the latest version, BM25, and the scifact dataset:
```python
from neural_cherche import models, retrieve, utils

# Load the SciFact test split from the BEIR benchmark.
documents, queries, qrels = utils.load_beir(
    "scifact",
    split="test",
)

# BM25 retriever indexing the "title" and "text" fields, keyed by document id.
retriever = retrieve.BM25(
    key="id",
    on=["title", "text"],
)

# Encode and index the documents.
documents_embeddings = retriever.encode_documents(
    documents=documents,
)

retriever = retriever.add(documents_embeddings=documents_embeddings)

# Encode the queries and retrieve the top-k documents for each of them.
queries_embeddings = retriever.encode_queries(
    queries=queries,
)

scores = retriever(
    queries_embeddings=queries_embeddings,
    k=10,
    tqdm_bar=True,
    batch_size=1024,
)

# Compute nDCG@10 and hits@1 through hits@9 against the relevance judgments.
scores = utils.evaluate(
    scores=scores,
    qrels=qrels,
    queries=queries,
    metrics=["ndcg@10"] + [f"hits@{k}" for k in range(1, 10)],
)

print(scores)
```
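As a side note on the inputs, here is a minimal sketch for inspecting what `utils.load_beir` returns before encoding. The structures noted in the comments are assumptions inferred from how the script above uses them (documents as dicts carrying `id`, `title`, and `text`; queries as plain strings), not statements from the library documentation:

```python
from neural_cherche import utils

# Load the same BEIR dataset used in the script above.
documents, queries, qrels = utils.load_beir("scifact", split="test")

# Assumed structures, inferred from the retriever configuration above
# (key="id", on=["title", "text"]).
print(documents[0])  # expected: a dict with "id", "title" and "text" fields
print(len(queries), "queries")
print(len(qrels), "relevance judgments consumed by utils.evaluate")
```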
Hey @raphaelsty,
Thanks a lot for your feedback! I was actually using the right version of the package (1.4.3) and your code snippet is working now... I don't really know what I did wrong, but thanks for your help!
Hey there!
I'm trying to run the "evaluate" demo code from the documentation, but I run into this error:
When looking at the library code, at the `neural_cherche/utils/evaluate.py` path, we can see that the `add_duplicates` function has two parameters, `queries` and `scores`, but not `results` (a minimal sketch of this kind of mismatch follows below). Otherwise, thanks for the library, excellent work 👍!
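As a side note, this is the standard Python failure mode when a call passes a keyword argument that the target function does not declare. The snippet below is a minimal, self-contained sketch of that mismatch; the `add_duplicates` signature here is a hypothetical stand-in, not the actual neural_cherche code:

```python
def add_duplicates(queries, scores):
    # Hypothetical stand-in for the helper discussed above; the real
    # neural_cherche signature may differ.
    return scores


try:
    # An outdated example passing `results` instead of `scores` fails here.
    add_duplicates(queries=[], results=[[]])
except TypeError as error:
    print(error)  # add_duplicates() got an unexpected keyword argument 'results'
```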